# Summary
This document presents the data management plan for the HarmonicSS project. More specifically, it provides the actions to be taken and the processes to be followed for managing the data collected and generated during the project’s lifetime. HarmonicSS brings together and will integrate datasets from _22 entities across Europe and the USA_. The datasets to be collected and interlinked into the HarmonicSS integrated cohort are _23_ in total and come from _11 different countries_ (namely Germany, Italy, Belgium, the UK, Spain, Norway, Sweden, Greece, France, the Netherlands and the USA), including pSS-related data for more than _11,680 patients (expected to reach up to 12,500 patients by the end of the project)_.
This report covers: a) the description of the data to be collected, processed and generated, b) the data collection procedure followed, including the methodology and standards which will be applied, c) whether data will be shared/made open and how, d) the storage and backup processes and e) the data curation and preservation activities. Moreover, the specific responsibilities for the different data management activities are described, and any relevant institutional, departmental or study policies on data sharing and data security are presented.
# Introduction
This report describes the data management life cycle for the datasets to be collected, analysed, processed and produced by the HarmonicSS project. It presents how data will be handled during the project, as well as how and which parts of the datasets will be made available following the project’s completion.
In an overall view, Section 3 presents the HarmonicSS data, which include the HarmonicSS Integrated Cohort and the results of the HarmonicSS scenarios. Section 4 presents the data collection process and the individual datasets to be collected for the purposes of the HarmonicSS project, with specific details for each dataset, including the data provider, its current purpose, the number of unique patients and records it includes, and its potential to expand through the inclusion of new data on old patients and/or newly generated data on new patients, among others. Section 5 presents the metadata to be generated for the HarmonicSS data, the accompanying documentation to be prepared and the curation processes applied. Section 6 presents the storage and backup mechanisms and operations, with a particular focus on the Cloud infrastructure set up for the purposes of the HarmonicSS operations and the data security mechanisms to be applied, based on the close interaction with Task 4.3 “Set up of the secure cloud infrastructure, repositories and platform”. The legal and ethical compliance of the data management operations is described in Section 7, based on the work and results of Task 3.2 “Ethical, legal, privacy and IPR requirements for data sharing and clinical trials” and the continuous developments of the project. Finally, Section 8 presents data preservation, sharing and access based on the combined results of the aforementioned Tasks 3.2 and 4.3, as well as Task 2.3 “Sustainability and expandability” and Task 5.1 “Cohort data governance and sharing services”. Specific responsibilities for data management are presented in Section 9.
It should be noted that although HarmonicSS is not part of the Open Research Data Pilot (ORD pilot), a Data Management Plan was considered a useful guide for the project’s data-related operations and was thus included in the workplan. The report follows the “Guidelines on FAIR Data Management in Horizon 2020” as published by the European Commission [1]. Given that most of the related activities and tasks are ongoing and have not yet produced their final results, the data management plan of the project may be revised in the future.
# The HarmonicSS Data
## The HarmonicSS Integrated Cohort
One of the main achievements targeted by the HarmonicSS project is to interlink 23 highly heterogeneous local, regional, national, international and clinical-trial cohorts of patients with Primary Sjögren’s Syndrome (pSS) from 22 entities across Europe and the USA into **one Integrated Cohort**. In these cohorts, heterogeneity is present at almost all levels, posing significant challenges to their integration. The data include qualitative and quantitative clinical measurements, medical records, administrative information, ultrasound images and biopsy results, among others. Section 4 presents the different datasets to be integrated.
The HarmonicSS Integrated Cohort is based on a highly detailed _Reference Model_ which will be built with Semantic Web technologies based on:
1. the analysis by the clinical partners of the minimum criteria for data inclusion and the quality of a cohort (Task 4.1), and
2. parameters which are met across the datasets and are not included in (1).
The HarmonicSS Reference Model will include a wide range of well-structured
parameters, properly annotated with information about their purpose, applied
methods, value range and units, based on the results of Task 4.1 as presented
in D4.2 “Guidelines and clinical preparatory actions for completing individual
cohorts”. In order for the HarmonicSS Reference Model to be interoperable and
extendable, it will be based on and aligned with international standards and
widely used domain models, vocabularies and classifications. The list of
minimum criteria to be included in the cohort for data inclusion and quality
purposes is presented in Annex 1.
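By way of illustration, the following minimal sketch shows how one such annotated parameter could be expressed with Semantic Web technologies, here using the rdflib Python library. The namespace, class and property names (e.g., `hss:ReferenceParameter`, `hss:appliedMethod`) and the example values are hypothetical and are not taken from the actual Reference Model.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS, XSD

# Hypothetical namespace; the actual Reference Model vocabulary may differ.
HSS = Namespace("http://example.org/harmonicss/refmodel#")

g = Graph()
g.bind("hss", HSS)

# One annotated parameter (illustrative values only): Schirmer's test,
# with its purpose, applied method, value range and unit.
param = HSS["SchirmersTest"]
g.add((param, RDF.type, HSS.ReferenceParameter))
g.add((param, RDFS.label, Literal("Schirmer's test", lang="en")))
g.add((param, HSS.purpose, Literal("Assessment of tear production for ocular dryness")))
g.add((param, HSS.appliedMethod, Literal("Schirmer strip, 5 minutes, without anaesthesia")))
g.add((param, HSS.minValue, Literal(0, datatype=XSD.integer)))
g.add((param, HSS.maxValue, Literal(35, datatype=XSD.integer)))
g.add((param, HSS.unit, Literal("mm/5 min")))

print(g.serialize(format="turtle"))
```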
In the end, the HarmonicSS Integrated Cohort will be the outcome of the
integration of the cohorts provided by the 22 data providers of the
Consortium, which will follow international standards both in terms of
structure and values (i.e., vocabularies, classifications and coding systems).
## Scenarios’ Results
The approaches followed and the findings from the implementation of the HarmonicSS scenarios, including potentially new knowledge, will also be part of the HarmonicSS data. More specifically, Table 1 presents the expected results of each scenario-related task of the project.
| **Scenario Result Dataset ID** | **Scenario Title** | **Expected Results** | **Task** |
|---|---|---|---|
| HSS_SR_01 | Stratification model for classifying pSS patients | A stratification model for: (i) the early identification of high-risk individuals in clinical practice by classifying pSS patients according to laboratory predictors, biomarkers and clinical phenotypes, (ii) the estimation of the risk of subsequent organ involvement, disease course and comorbidities (e.g. atherosclerosis, osteoporosis, psychopathological issues) in different subgroups of pSS patients and (iii) the prescription of specific treatments to different subgroups. | Task 6.1 |
| HSS_SR_02 | Validation of existing biomarkers and discovery of novel ones | Validated clinical, laboratory and histopathological predictors, as well as autoantibodies and genetic variants, for early pSS diagnosis, lymphoma development and response to therapy. Novel molecular and genetic biomarkers for early pSS diagnosis using composite indices, lymphoma development and response to therapy, which can be used in the future for targeted therapies. | Task 6.2 |
| HSS_SR_03 | Models of lymphomagenesis | A model for lymphomagenesis development in pSS patients capturing the important and informative features that lead to lymphomagenesis. | Task 6.3 |
| HSS_SR_04 | Formulation of common clinical practices, guidelines and rules | Common European guidelines for the management of pSS patients. Rules for storing blood, tissue, saliva, serum, cells, DNA, RNA and biopsy samples in biobanks for pSS. | Task 6.4 |
| HSS_SR_05 | pSS health policy development | Preliminary estimations of the impact of a health policy through measurement of changes in collected measures, also considering the context characteristics. | Tasks 8.1-8.3 |
_Table 1 The HarmonicSS Scenarios Results_
The scenarios’ results will be published on the project’s website as soon as they become available, in line with the project’s timeplan (i.e., HSS_SR_01 to HSS_SR_04 at PM36 and HSS_SR_05 at PM42), in XML format and annotated with the metadata described in Section 5.
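As a minimal sketch of what such a serialisation could look like, the Python fragment below writes one scenario result to an .xml file. The element names and metadata fields are assumptions for illustration; the actual schema will follow the metadata described in Section 5.

```python
import xml.etree.ElementTree as ET

# Hypothetical structure; the real element names and metadata fields
# will follow the project's metadata scheme (Section 5).
result = ET.Element("scenarioResult", id="HSS_SR_03")
meta = ET.SubElement(result, "metadata")
ET.SubElement(meta, "title").text = "Models of lymphomagenesis"
ET.SubElement(meta, "task").text = "Task 6.3"
ET.SubElement(meta, "availableFrom").text = "PM36"
content = ET.SubElement(result, "content")
content.text = "Model description and findings go here."

ET.ElementTree(result).write("HSS_SR_03.xml", encoding="utf-8", xml_declaration=True)
```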
# Data Collection
## Description of Data to be collected
As previously mentioned, HarmonicSS brings together and will eventually interlink datasets from _22 entities across Europe and the USA_. The datasets to be integrated into the HarmonicSS integrated cohort, being _23_ in total (as one partner, UPSud, will provide 2 datasets) and coming from _11 different countries_ (namely Germany, Italy, Belgium, the UK, Spain, Norway, Sweden, Greece, France, the Netherlands and the USA), include pSS-related data for more than _11,680 patients (expected to reach up to 12,500 patients by the end of the project)_. In terms of content, these data cover demographics, diagnoses, results of ocular and oral tests, biopsy results, ultrasounds, laboratory test results, clinical assessments, previous and current therapies and genetic data. Among the datasets to be collected and integrated, 6 are stored in an SQL database (including 1 in PostgreSQL), 1 in OpenClinica and 16 are in Excel format.
The next subsections present each dataset to be collected in terms of its current purpose, the number of unique patients and records (with one patient having one or more records), the time period it covers, whether it includes longitudinal data, the possibility of expanding it by including either new data from old patients or data from new patients, the language of the data, past integration efforts, the quality assurance processes applied and its pseudonymisation status. It should also be noted that each dataset is assigned a dataset ID of the form HSS_CD_[NUMBERING], with HSS standing for HarmonicSS and CD for Collected Dataset.
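For illustration only, the numbering scheme can be reproduced with a small helper; this is a sketch, as in the project the IDs are simply assigned rather than generated.

```python
def dataset_id(n: int) -> str:
    """Build a HarmonicSS collected-dataset ID, e.g. dataset_id(1) -> 'HSS_CD_01'."""
    return f"HSS_CD_{n:02d}"

# The 23 collected datasets are numbered HSS_CD_01 to HSS_CD_23.
print([dataset_id(n) for n in (1, 10, 23)])  # ['HSS_CD_01', 'HSS_CD_10', 'HSS_CD_23']
```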
### Datasets from UoA
| **Dataset ID** | HSS_CD_01 |
|---|---|
| **Partner Name** | UoA |
| What is the current **purpose** of your data? How has it been **used** so far? | To systematically collect clinical and laboratory information on patients with primary Sjögren’s syndrome and associate them with genetic, biochemical and histopathological biomarkers. |
| Does your cohort involve **longitudinal** data? | Yes |
| Have you ever integrated your cohort with other cohorts in the past? | No |
| How many **unique patients** do your datasets involve? | 600 patients |
| How many **different records** do your datasets include? | 600 records |
| What is the **time period** the data cover? | From 01/01/1980 to date |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | Yes |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | Once a year |
| If yes, do the new data refer to **old** patients or also **new** ones? | Old and new |
| If planning to include new patients, could you provide an estimation for their number? | 300 |
| Is your dataset in English? | Yes |
| Is the dataset already pseudonymised? | No |
| Have you performed any quality checks on your dataset? | Yes |
_Table 2 The UoA dataset description_
### Datasets from AOUD
| **Dataset ID** | HSS_CD_02 |
|---|---|
| **Partner Name** | AOUD |
| What is the current **purpose** of your data? How has it been **used** so far? | The Udine cohort of Sjögren’s syndrome was created in 2010 in order to collect data, blood and tissue samples with a view to performing clinical research, with the main purpose of identifying new biomarkers and stratifying SS patients mainly according to their different risk of lymphoma development. The cohort is local in nature, but we share our data in a large multicentre Italian cohort of SS patients with the other centres of the Italian Group of Study of Sjögren’s syndrome (GRISS). |
| Does your cohort involve **longitudinal** data? | No |
| Have you ever integrated your cohort with other cohorts in the past? | Yes |
| How many **unique patients** do your datasets involve? | 275 patients |
| How many **different records** do your datasets include? | 275 records |
| What is the **time period** the data cover? | From 2010 to 2017 |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | Yes |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | Every 6 months |
| If yes, do the new data refer to **old** patients or also **new** ones? | Both |
| If planning to include new patients, could you provide an estimation for their number? | 30-40/year |
| Is your dataset in English? | Yes, but it also includes Italian |
| Is the dataset already pseudonymised? | Yes |
| Have you performed any quality checks on your dataset? | It has already been used for clinical studies |
_Table 3 The AOUD dataset description_
### Datasets from UNIPI
| **Dataset ID** | HSS_CD_03 |
|---|---|
| **Partner Name** | UNIPI |
| What is the current **purpose** of your data? How has it been **used** so far? | Clinical research |
| Does your cohort involve **longitudinal** data? | Yes, for specific sub-studies |
| Have you ever integrated your cohort with other cohorts in the past? | Yes |
| How many **unique patients** do your datasets involve? | 377 patients |
| How many **different records** do your datasets include? | 377 records |
| What is the **time period** the data cover? | From 1999 to 2017 |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | Yes |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | Every 6 months |
| If yes, do the new data refer to **old** patients or also **new** ones? | Also new ones |
| If planning to include new patients, could you provide an estimation for their number? | 50 |
| Is your dataset in English? | No |
| Is the dataset already pseudonymised? | No |
| Have you performed any quality checks on your dataset? | No |
_Table 4 The UNIPI dataset description_
### Datasets from UNIPG
| **Dataset ID** | HSS_CD_04 |
|---|---|
| **Partner Name** | UNIPG |
| What is the current **purpose** of your data? How has it been **used** so far? | To collect clinical and serologic data of primary Sjögren’s syndrome patients followed in our Rheumatology Unit. |
| Does your cohort involve **longitudinal** data? | Yes |
| Have you ever integrated your cohort with other cohorts in the past? | Yes |
| How many **unique patients** do your datasets involve? | 154 patients |
| How many **different records** do your datasets include? | 154 records |
| What is the **time period** the data cover? | From 2010 to 2017 |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | Yes |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | Every 6 months |
| If yes, do the new data refer to **old** patients or also **new** ones? | Yes, also new ones |
| If planning to include new patients, could you provide an estimation for their number? | 8-10 patients/year |
| Is your dataset in English? | No |
| Is the dataset already pseudonymised? | No |
| Have you performed any quality checks on your dataset? | No |
_Table 5 The UNIPG dataset description_
### Datasets from UNIVAQ
| **Dataset ID** | HSS_CD_05 |
|---|---|
| **Partner Name** | UNIVAQ |
| What is the current **purpose** of your data? How has it been **used** so far? | (i) To assess both the diagnostic and prognostic value of MSG histopathology in stratifying different clinical subsets of the disease; (ii) to correlate the clinical, serological and immunological spectrum of pSS patients with the histological and molecular findings in labial minor salivary glands; and (iii) to assess the efficacy and safety of Rituximab in early primary Sjögren’s syndrome. |
| Does your cohort involve **longitudinal** data? | Yes |
| Have you ever integrated your cohort with other cohorts in the past? | Yes |
| How many **unique patients** do your datasets involve? | 111 patients |
| How many **different records** do your datasets include? | 111 records |
| What is the **time period** the data cover? | From 2010 to 2017 |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | Yes |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | 2 times/year |
| If yes, do the new data refer to **old** patients or also **new** ones? | Yes (old and new patients) |
| If planning to include new patients, could you provide an estimation for their number? | 100 |
| Is your dataset in English? | No |
| Is the dataset already pseudonymised? | No |
| Have you performed any quality checks on your dataset? | No |
_Table 6 The UNIVAQ dataset description_
### Datasets from UNIRO
| **Dataset ID** | HSS_CD_06 |
|---|---|
| **Partner Name** | UNIRO |
| What is the current **purpose** of your data? How has it been **used** so far? | Research purposes. Some data have been used for collaborative works (Italian study group for SS). |
| How many **unique patients** do your datasets involve? | 514 |
| How many **different records** do your datasets include? | 514 |
| What is the **time period** the data cover? | From 1995 to 2017 (only one record dates from 1995; the majority are from 2005 onwards) |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | Yes: histological data with evaluation of focus score, mean foci area (mm²), % of MSG infiltration, segregated foci (yes or no), percentage of segregated foci, presence of germinal centre (GC) (yes or no), % of GC and presence of LESA (yes or no). |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | Whenever necessary, estimated to be every 3 months |
| If yes, do the new data refer to **old** patients or also **new** ones? | They will refer to all the new patients and also to part of the old patients, if biopsies are available (so far around 50 patients) |
| If planning to include new patients, could you provide an estimation for their number? | At least 50 new per year |
| Is your dataset in English? | Originally in Italian, but it has been translated into English |
| Is the dataset already pseudonymised? | Not yet |
| Have you performed any quality checks on your dataset? | No |
_Table 7 The UNIRO dataset description_
### Datasets from MHH
| **Dataset ID** | HSS_CD_07 |
|---|---|
| **Partner Name** | MHH |
| What is the current **purpose** of your data? How has it been **used** so far? | Studying associations of autoantibodies with CTDs and clinical features of pSS. |
| Does your cohort involve **longitudinal** data? | Yes |
| Have you ever integrated your cohort with other cohorts in the past? | No |
| How many **unique patients** do your datasets involve? | 255 patients |
| How many **different records** do your datasets include? | 2000 records |
| What is the **time period** the data cover? | From 2012 to 2017 |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | No |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | - |
| If yes, do the new data refer to **old** patients or also **new** ones? | - |
| If planning to include new patients, could you provide an estimation for their number? | - |
| Is your dataset in English? | No |
| Is the dataset already pseudonymised? | Yes |
| Have you performed any quality checks on your dataset? | Yes |
_Table 8 The MHH dataset description_
### Datasets from CUMB
| **Dataset ID** | HSS_CD_08 |
|---|---|
| **Partner Name** | CUMB |
| What is the current **purpose** of your data? How has it been **used** so far? | Data bank to identify patients with Sjögren’s to participate in pre-clinical and clinical studies. |
| Does your cohort involve **longitudinal** data? | No |
| Have you ever integrated your cohort with other cohorts in the past? | Partially |
| How many **unique patients** do your datasets involve? | ~180 patients |
| How many **different records** do your datasets include? | ~180 records |
| What is the **time period** the data cover? | From around 2010 onwards |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | Yes |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | TBD |
| If yes, do the new data refer to **old** patients or also **new** ones? | Both |
| If planning to include new patients, could you provide an estimation for their number? | TBD |
| Is your dataset in English? | No |
| Is the dataset already pseudonymised? | Yes |
| Have you performed any quality checks on your dataset? | Yes |
_Table 9 The CUMB dataset description_
### Datasets from UoB
| **Dataset ID** | HSS_CD_09 |
|---|---|
| **Partner Name** | UoB |
| **Dataset Name** | OASIS |
| What is the current **purpose** of your data? How has it been **used** so far? | (i) To develop blood, tissue and imaging biomarkers predictive of disease activity and complications in pSS, including lymphoma development; (ii) to identify peripheral blood and imaging correlates of the key diagnostic and prognostic features present in salivary gland biopsies; and (iii) to identify risk factors for pSS. |
| Does your cohort involve **longitudinal** data? | Not at present |
| Have you ever integrated your cohort with other cohorts in the past? | No |
| How many **unique patients** do your datasets involve? | 81 pSS patients meeting AECG criteria, 13 pSS patients not meeting AECG criteria, 9 secondary Sjögren’s and 70 sicca syndrome |
| How many **different records** do your datasets include? | 173 records |
| What is the **time period** the data cover? | From 04/2014 to 04/2017 |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | TBD |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | - |
| If yes, do the new data refer to **old** patients or also **new** ones? | TBD |
| If planning to include new patients, could you provide an estimation for their number? | - |
| Is your dataset in English? | Yes |
| Is the dataset already pseudonymised? | Yes |
| Have you performed any quality checks on your dataset? | Yes |
_Table 10 The UoB dataset description_
### Datasets from QMUL
| **Dataset ID** | HSS_CD_10 |
|---|---|
| **Partner Name** | QMUL |
| **Dataset Name** | Sjogren EMR Biobank |
| What is the current **purpose** of your data? How has it been **used** so far? | Correlation of salivary gland biopsies, peripheral blood immunophenotype and circulating inflammatory biomarkers with clinical parameters of disease activity. |
| How many **unique patients** do your datasets involve? | 300 patients |
| How many **different records** do your datasets include? | 300 records |
| What is the **time period** the data cover? | From 2010 to 2017 |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | Yes |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | Every 6 months |
| If yes, do the new data refer to **old** patients or also **new** ones? | Potentially both |
| If planning to include new patients, could you provide an estimation for their number? | TBD |
| Is your dataset in English? | Yes |
| Is the dataset already pseudonymised? | Yes |
| Have you performed any quality checks on your dataset? | Yes |
_Table 11 The QMUL dataset description_
### Datasets from UNEW
| **Dataset ID** | HSS_CD_11 |
|---|---|
| **Partner Name** | UNEW |
| **Dataset Name** | UK primary Sjogren’s syndrome registry |
| What is the current **purpose** of your data? How has it been **used** so far? | For research; it has been used to support various research projects led by Newcastle as well as by collaborators. |
| Does your cohort involve **longitudinal** data? | Yes (a subset) |
| Have you ever integrated your cohort with other cohorts in the past? | No |
| How many **unique patients** do your datasets involve? | 850 patients |
| How many **different records** do your datasets include? | 1100 records |
| What is the **time period** the data cover? | From 2009 to 2017 |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | Yes |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | Annually |
| If yes, do the new data refer to **old** patients or also **new** ones? | Both |
| If planning to include new patients, could you provide an estimation for their number? | 100 |
| Is your dataset in English? | Yes |
| Is the dataset already pseudonymised? | Yes |
| Have you performed any quality checks on your dataset? | Yes |
_Table 12 The UNEW dataset description_
### Datasets from IDIBAPS
| **Dataset ID** | HSS_CD_12 |
|---|---|
| **Partner Name** | IDIBAPS |
| What is the current **purpose** of your data? How has it been **used** so far? | Retrospective clinical description |
| Does your cohort involve **longitudinal** data? | No |
| Have you ever integrated your cohort with other cohorts in the past? | Yes |
| How many **unique patients** do your datasets involve? | 580 patients |
| How many **different records** do your datasets include? | 580 records |
| What is the **time period** the data cover? | From 1992 to 2017 |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | No |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | - |
| If yes, do the new data refer to **old** patients or also **new** ones? | - |
| If planning to include new patients, could you provide an estimation for their number? | - |
| Is your dataset in English? | No |
| Is the dataset already pseudonymised? | Yes |
| Have you performed any quality checks on your dataset? | No |
_Table 13 The IDIBAPS dataset description_
### Datasets from UBO
| **Dataset ID** | HSS_CD_13 |
|---|---|
| **Dataset Name** | DIApSS cohort |
| What is the current **purpose** of your data? How has it been **used** so far? | To study the diagnostic performance of the various existing classification criteria at inclusion; to study the diagnostic performance of new tests used in clinical practice, such as salivary gland ultrasound, various biological tests and new histological methods...; to evaluate during follow-up the evolution of the disease and the results of the different tests; to correlate the different tests with the prognosis of the patient (5 years); and to study the evolution of the disease activity and the quality of life of the patient. |
| Does your cohort involve **longitudinal** data? | Yes, but not for all the patients |
| Have you ever integrated your cohort with other cohorts in the past? | Yes (ASSESS cohort, Precisesads) |
| How many **unique patients** do your datasets involve? | 100 |
| How many **different records** do your datasets include? | 100 |
| What is the **time period** the data cover? | From 2006 to 2017 |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | No |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | - |
| If yes, do the new data refer to **old** patients or also **new** ones? | - |
| If planning to include new patients, could you provide an estimation for their number? | - |
| Is your dataset in English? | No, but easy to translate |
| Is the dataset already pseudonymised? | Yes |
| Have you performed any quality checks on your dataset? | No |
_Table 14 The UBO dataset description_
### Datasets from UPSud
| **Dataset ID** | HSS_CD_14 |
|---|---|
| **Partner Name** | UPSud |
| **Dataset Name** | Paris Sud Cohort |
| What is the current **purpose** of your data? How has it been **used** so far? | Our data have been used for the translational studies we have performed and for the establishment of the new ACR/EULAR criteria of the disease. |
| Does your cohort involve **longitudinal** data? | No |
| Have you ever integrated your cohort with other cohorts in the past? | Yes, with the OMRF and SICCA cohorts for establishing the ACR/EULAR criteria |
| How many **unique patients** do your datasets involve? | 519 patients |
| How many **different records** do your datasets include? | 519 records |
| What is the **time period** the data cover? | From 2001 to 2016 |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | Yes, but only if lymphoma development occurs |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | Every year |
| If yes, do the new data refer to **old** patients or also **new** ones? | Old patients |
| If planning to include new patients, could you provide an estimation for their number? | - |
| Is your dataset in English? | No |
| Is the dataset already pseudonymised? | Yes |
| Have you performed any quality checks on your dataset? | Yes, when merged with the American data for providing the criteria |
_Table 15 The UPSud (Paris Sud cohort) dataset description_
| **Dataset ID** | HSS_CD_15 |
|---|---|
| **Partner Name** | UPSud |
| **Dataset Name** | ASSESS Cohort |
| What is the current **purpose** of your data? How has it been **used** so far? | French multicentre cohort involving 15 centres; 12 publications concern this cohort. |
| Does your cohort involve **longitudinal** data? | Yes; 1 visit every year for 20 years |
| Have you ever integrated your cohort with other cohorts in the past? | Yes; we perform common research with Fai Ng (UNEW) involving this cohort integrated with the UK registry |
| How many **unique patients** do your datasets involve? | 395 patients |
| How many **different records** do your datasets include? | 395 records |
| What is the **time period** the data cover? | Inclusions from 2006 to 2009; follow-up is ongoing |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | Yes |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | Longitudinal data every year |
| If yes, do the new data refer to **old** patients or also **new** ones? | Old patients |
| If planning to include new patients, could you provide an estimation for their number? | - |
| Is your dataset in English? | No |
| Is the dataset already pseudonymised? | Yes |
| Have you performed any quality checks on your dataset? | Yes, at each translational study performed with this cohort |
_Table 16 The UPSud (ASSESS cohort) dataset description_
### Datasets from UU
| **Dataset ID** | HSS_CD_16 |
|---|---|
| **Partner Name** | UU |
| **Dataset Name** | “DISSECT”: Dissecting disease mechanisms in three systemic inflammatory autoimmune diseases with an interferon signature |
| What is the current **purpose** of your data? How has it been **used** so far? | To investigate genetic susceptibility for pSS through targeted gene sequencing and genome-wide association analyses, and to investigate genetic variants associated with different disease phenotypes, followed by functional analyses. Clinical data describing female/male differences have been reported in an accepted manuscript. A subset of the data is part of the Sjögren’s syndrome genetics network “SGENE” collaboration with Kathy Sivils, USA. |
| Does your cohort involve **longitudinal** data? | No |
| Have you ever integrated your cohort with other cohorts in the past? | Yes |
| How many **unique patients** do your datasets involve? | 983 patients |
| How many **different records** do your datasets include? | 983 records |
| What is the **time period** the data cover? | From 1956 (first year recorded as symptom onset) to 2017 |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | Yes |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | Once a year |
| If yes, do the new data refer to **old** patients or also **new** ones? | Mostly new patients |
| If planning to include new patients, could you provide an estimation for their number? | 25 per year |
| Is your dataset in English? | Yes |
| Is the dataset already pseudonymised? | Yes |
| Have you performed any quality checks on your dataset? | Yes |
_Table 17 The UU dataset description_
### Datasets from UMCG
| **Dataset ID** | HSS_CD_17 |
|---|---|
| **Partner Name** | UMCG |
| **Dataset Name** | RESULT cohort |
| What is the current **purpose** of your data? How has it been **used** so far? | Primary: to identify biomarkers and clinical parameters that determine and predict the longitudinal course of pSS, taking into account patient-reported, functional, imaging, histopathological, laboratory and genetic data. Secondary: a) to identify biomarkers and parameters that determine and predict the progression from early pSS to established pSS; b) to evaluate the effect of treatment of pSS in routine clinical practice and to identify predictors of response to treatment; c) to assess the diagnostic ability of salivary gland ultrasound in pSS and its applicability in monitoring disease activity and progression. |
| Does your cohort involve **longitudinal** data? | Yes |
| Have you ever integrated your cohort with other cohorts in the past? | No |
| How many **unique patients** do your datasets involve? | Current inclusion: 120 patients; expected inclusion (total): 500 patients |
| How many **different records** do your datasets include? | Per patient: max. 793 variables; currently: 1-3 time points; expected (total): max. 26 time points |
| What is the **time period** the data cover? | Inclusion: January 2016 to December 2020; follow-up: until December 2030 |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | Yes |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | Up to 26 times per patient |
| If yes, do the new data refer to **old** patients or also **new** ones? | Current inclusion: 120 patients; expected inclusion (total): 500 patients |
| If planning to include new patients, could you provide an estimation for their number? | Expected inclusion (total): 500 patients |
| Is your dataset in English? | No |
| Is the dataset already pseudonymised? | Yes |
| Have you performed any quality checks on your dataset? | Yes |
_Table 18 The UMCG dataset description_
### Datasets from UMCU
| **Dataset ID** | HSS_CD_18 |
|---|---|
| **Partner Name** | UMCU |
| **Dataset Name** | UMCU cohort, U-systems cohort |
| What is the current **purpose** of your data? How has it been **used** so far? | Clinical and fundamental research: to find determinants of disease severity in pSS. |
| Does your cohort involve **longitudinal** data? | Yes, partly |
| Have you ever integrated your cohort with other cohorts in the past? | Yes (big data cohort, Ramos Casals et al.) |
| How many **unique patients** do your datasets involve? | 378 patients |
| How many **different records** do your datasets include? | 783 records |
| What is the **time period** the data cover? | From 1996 to 2017 |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | Yes |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | 40 pSS patients/year |
| If yes, do the new data refer to **old** patients or also **new** ones? | New ones, and follow-up (old) |
| If planning to include new patients, could you provide an estimation for their number? | 20 pSS patients/year |
| Is your dataset in English? | Yes |
| Is the dataset already pseudonymised? | Yes |
| Have you performed any quality checks on your dataset? | Yes |
_Table 19 The UMCU dataset description_
### Datasets from NIVEL
| **Dataset ID** | HSS_CD_19 |
|---|---|
| **Partner Name** | NIVEL |
| **Dataset Name** | NIVEL Primary Care Database |
| What is the current **purpose** of your data? How has it been **used** so far? | To provide nationally representative information on developments in population health, health service utilisation and the quality of primary care. Our data have been used in several hundred research projects for secondary analyses. |
| Does your cohort involve **longitudinal** data? | Yes |
| Have you ever integrated your cohort with other cohorts in the past? | No |
| How many **unique patients** do your datasets involve? | ~3000 |
| How many **different records** do your datasets include? | ~3000 |
| What is the **time period** the data cover? | Longitudinal data on this scale are available since 2010 |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | No |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | - |
| If yes, do the new data refer to **old** patients or also **new** ones? | - |
| If planning to include new patients, could you provide an estimation for their number? | - |
| Is your dataset in English? | No |
| Is the dataset already pseudonymised? | Yes |
| Have you performed any quality checks on your dataset? | Yes |
_Table 20 The NIVEL dataset description_
### Datasets from UiB
| **Dataset ID** | HSS_CD_20 |
|---|---|
| **Partner Name** | UiB |
| What is the current **purpose** of your data? How has it been **used** so far? | Quality register for the Department of Rheumatology and the Research Group Immunology/Rheumatology; substrate for further analysis/research during the last 20 years. |
| Does your cohort involve **longitudinal** data? | Yes |
| Have you ever integrated your cohort with other cohorts in the past? | Yes |
| How many **unique patients** do your datasets involve? | 200 patients within 2 years |
| How many **different records** do your datasets include? | 200 records |
| What is the **time period** the data cover? | From 01.04.2017, ongoing |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | TBD |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | - |
| If yes, do the new data refer to **old** patients or also **new** ones? | If yes, data on old patients will be updated and new patients included |
| If planning to include new patients, could you provide an estimation for their number? | 40 new patients by the end of 2017 |
| Is your dataset in English? | No |
| Is the dataset already pseudonymised? | No |
| Have you performed any quality checks on your dataset? | No |
_Table 21 The UiB dataset description_
### Datasets from ULB
| **Dataset ID** | HSS_CD_21 |
|---|---|
| **Partner Name** | ULB |
| What is the current **purpose** of your data? How has it been **used** so far? | The cohort of patients suffering from Sjögren’s syndrome was created to collect data, blood and tissue samples with a view to performing translational research aimed at deciphering the pathogenesis of the disease and identifying new diagnostic biomarkers. |
| Does your cohort involve **longitudinal** data? | Yes |
| Have you ever integrated your cohort with other cohorts in the past? | No |
| How many **unique patients** do your datasets involve? | 210 patients |
| How many **different records** do your datasets include? | 210 records |
| What is the **time period** the data cover? | From 2008 to 2017 |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | Yes |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | 2 times per year |
| If yes, do the new data refer to **old** patients or also **new** ones? | Yes |
| If planning to include new patients, could you provide an estimation for their number? | TBD |
| Is your dataset in English? | No (could be changed to English) |
| Is the dataset already pseudonymised? | Yes |
| Have you performed any quality checks on your dataset? | No |
_Table 22 The ULB dataset description_
### Datasets from OMRF
| **Dataset ID** | HSS_CD_22 |
|---|---|
| **Partner Name** | OMRF |
| What is the current **purpose** of your data? How has it been **used** so far? | This resource supports numerous clinical studies for OMRF investigators and outside collaborators. Studies include gene mapping, transcriptional profiling, clinical characterizations and immunological studies. |
| Does your cohort involve **longitudinal** data? | No |
| Have you ever integrated your cohort with other cohorts in the past? | No |
| How many **unique patients** do your datasets involve? | 1310 patients |
| How many **different records** do your datasets include? | 1310 records |
| What is the **time period** the data cover? | From Jan 2008 to Dec 2016 |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | No |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | - |
| If yes, do the new data refer to **old** patients or also **new** ones? | - |
| If planning to include new patients, could you provide an estimation for their number? | - |
| Is your dataset in English? | Yes |
| Is the dataset already pseudonymised? | Yes |
| Have you performed any quality checks on your dataset? | Yes |
_Table 23 The OMRF dataset description_
### Datasets from HAROKOPIO
| **Dataset ID** | HSS_CD_23 |
|---|---|
| **Partner Name** | HAROKOPIO |
| What is the current **purpose** of your data? How has it been **used** so far? | The cohort of patients with Sjögren’s syndrome includes sequential clinical and laboratory data, as well as tissue samples obtained for diagnostic as well as research purposes. Part of the data and tissues from this cohort have been used in previous clinical and translational research. |
| Does your cohort involve **longitudinal** data? | Yes |
| Have you ever integrated your cohort with other cohorts in the past? | Yes |
| How many **unique patients** do your datasets involve? | 100 patients |
| How many **different records** do your datasets include? | 200 records |
| What is the **time period** the data cover? | From 1985 to 2017 |
| Are there any **new data**, which will be collected during the project’s lifetime, expected to be included in the HarmonicSS integrated cohort? | Yes |
| If yes, how **frequently** do you expect to provide **new** data to the HarmonicSS integrated cohort? | Every six months |
| If yes, do the new data refer to **old** patients or also **new** ones? | Both old and new |
| If planning to include new patients, could you provide an estimation for their number? | 10 |
| Is your dataset in English? | Yes |
| Is the dataset already pseudonymised? | No |
| Have you performed any quality checks on your dataset? | Yes |
_Table 24 The HAROKOPIO dataset description_
## Methodology for Data Collection
The methodology utilised for the data collection process in the HarmonicSS project, towards the datasets’ successful integration into the HarmonicSS integrated cohort, is described in the next paragraphs. Given that the HarmonicSS scenarios’ results are outputs of the project’s scenarios rather than collected data, the data collection processes focus on the HarmonicSS Integrated Cohort.
The HarmonicSS Integrated Cohort will include three different sets of data:
1. retrospective data, i.e., data already collected from patients by the clinical partners serving as data providers,
2. new data from old patients of the clinical partners, in the form of follow-up visits, and
3. new data from new patients of the clinical partners (i.e., patients for whom no data had been collected and thus provided as part of the retrospective data),
each of which may require different handling in legal, ethical and technical terms. Nevertheless, the initial steps of the data collection process are common across all sets of data and are described in the following paragraphs.
As a first step, and in order for the technical partners to make a first assessment of the datasets’ variability and the required analysis workload, high-level descriptive information on each dataset was collected from the data providers by means of an Excel file. The requested information regarded the following (see the sketch after this list):
* the data storage type used (database, file incl. format, legacy system, EHR, etc.),
* the international or local vocabularies used (whether or not they are used, which ones are used and, in the case of local vocabularies, whether there is a mapping to international ones),
* the total number of records,
* the frequency of updates and
* the number of unique patients.
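The sketch below shows how such a descriptive-information file could be parsed for a first workload assessment, assuming one row per dataset. The column names are hypothetical, as the actual template layout is internal to the project.

```python
import pandas as pd

# Hypothetical column names matching the requested descriptive fields;
# the real template's headers may differ.
COLUMNS = ["dataset_id", "storage_type", "vocabularies",
           "total_records", "update_frequency", "unique_patients"]

def load_descriptions(path: str) -> pd.DataFrame:
    df = pd.read_excel(path)  # reading .xlsx files requires openpyxl
    missing = set(COLUMNS) - set(df.columns)
    if missing:
        raise ValueError(f"Template is missing columns: {sorted(missing)}")
    return df[COLUMNS]

# Example usage for a first variability/workload assessment:
# df = load_descriptions("dataset_descriptions.xlsx")
# print(df[["dataset_id", "total_records", "unique_patients"]])
```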
For the sake of effective data collection and analysis, each technical partner participating in the analysis and integration activities of the HarmonicSS cohorts was allocated a number of datasets (in line with their assigned effort), as presented in Table 25.
| **Technical Partner** | **Dataset IDs** |
|---|---|
| **ICCS/NTUA** | HSS_CD_01, HSS_CD_03, HSS_CD_06, HSS_CD_10, HSS_CD_16, HSS_CD_20, HSS_CD_23 |
| **UOI** | HSS_CD_13, HSS_CD_18, HSS_CD_19, HSS_CD_21 |
| **ATOS** | HSS_CD_05, HSS_CD_11, HSS_CD_12, HSS_CD_14, HSS_CD_15, HSS_CD_17 |
| **BIOIRC** | HSS_CD_02, HSS_CD_08 |
| **CERTH** | HSS_CD_04, HSS_CD_07, HSS_CD_09 |
| **TEIC** | HSS_CD_22 |
_Table 25 Assignment of Dataset to Technical Partners_
_All subsequent actions, interactions and communications_ pertaining to the
collection of additional information, the analysis and the in-depth
understanding of the dataset fields are carried out within the predetermined
technical partner and dataset provider pairs. The technical partners are
organized by the WP leader in collaboration with the technical coordinator.
Regular meetings (online and/or physical) are set up in order to organize the
team, monitor the status of work, identify any open issues and risks, and
promptly come up with solutions. Any serious problems related to the
cooperation with the data providers (e.g., lack of response or feedback) are
ultimately communicated to the scientific coordinator and the project
coordinator so that the appropriate actions can be taken for the issues to be
promptly resolved.
As a next interaction step, the WP leader created a set of templates for the
elicitation of more detailed information on the datasets. The resulting tables
(as presented in Section 4.1) provide information on the current data purpose
and present usage, the number of unique patients, the number of different
records, the time period the data cover, details regarding new data to be
collected during the project's lifetime, the language of the dataset, its
pseudonymisation status, quality checks, etc.
The data providers were further asked to provide the exact structure of their
dataset, including all of its fields, and a data sample, which should reflect
a pseudonymised record of real patient data. These data were initially
analysed by the technical partners in terms of:
* overall structure and individual parameters
* meaning of fields
* expected field values
* vocabularies, coding systems, classifications for specific fields, such as diagnosis, treatments, demographics (e.g., gender, ethnicity, etc.)
The analysis is enriched through iterative, collaborative interaction within
the individual technical partner and data provider pairs. The objective is to
clearly capture the meaning of each parameter in each dataset and obtain the
level of detail necessary for their prospective alignment. To this end,
cardinality restrictions, thresholds, the assumptions or methodology used for
defining field values, and the relations among fields are topics of discussion
within each pair.
In order to facilitate information and data sharing among the technical
partners, a collaborative file sharing repository is being utilized. A
separate folder was created for each of the 22 data providers, named using the
scheme _[NR].[PARTNER_SHORT_NAME]_, with NR standing for the partner's number
in the project Consortium. Partners work on an offline copy of their files and
regularly update them in the repository whenever they consider that
substantial changes have been made or an internally set deadline or milestone
approaches. Versioning is taken care of by including the date of the last
update at the beginning of each filename, in the format _YYMMDD_ (Year, Month,
Day). It should be noted that this repository is used exclusively for the
structure analysis and by no means is any patient data uploaded and/or shared
through this repository.
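To make this convention concrete, the following minimal sketch shows how the folder naming and date-prefixed versioning could be generated; the helper function names and the underscore separator after the date prefix are illustrative assumptions, not part of the agreed scheme.

```python
from datetime import date

def repository_folder(partner_nr: int, partner_short_name: str) -> str:
    """Build the per-provider folder name, e.g. '7.UOI' for partner number 7."""
    return f"{partner_nr}.{partner_short_name}"

def versioned_filename(base_name: str, last_update: date) -> str:
    """Prefix a filename with the date of last update in YYMMDD format."""
    return f"{last_update.strftime('%y%m%d')}_{base_name}"

print(repository_folder(7, "UOI"))  # -> 7.UOI
print(versioned_filename("structure_analysis.xlsx", date(2018, 3, 5)))
# -> 180305_structure_analysis.xlsx
```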
The collection of clinical data is carried out in accordance with the legal
requirements established in D3.2 "Ethical, legal, privacy and IPR requirements
for data sharing and clinical trials” and Ethics Requirements established in
WP9 “Ethics Requirements”.
Accordingly:
1. retrospective data is collected by the project subject to: the receipt of written approval by the Ethics Committee, pseudonymisation and other conditions set by D.3.2, Section 4.7, Table 3, if applicable;
2. follow-up data of old patients is collected subject to Ethics Approval, pseudonymisation, informed consent of the patient for research covering HarmonicSS (either in the form provided in D.3.2, Annex 4, or in the form used by the data provider, but satisfying the scope of research in HarmonicSS), other conditions set by D.3.2, Section 4.7, Table 3, if applicable;
3. newly collected data from new patients is collected subject to: Ethics Approval, pseudonymisation, informed consent of the patient for research covering HarmonicSS (either in the form provided in D.3.2, Annex 4, or in the form used by the data provider, but satisfying the scope of research in HarmonicSS), other conditions set by D.3.2, Section 4.7, Table 3, if applicable.
All data providers shall provide already pseudonymised data. Each technical
partner will then explicitly harmonize the datasets they are assigned by
linking them with the HarmonicSS Reference Model, in terms of both structure
and vocabularies.
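The mechanics of this linking step can be illustrated with a minimal sketch; the field and value mappings below are hypothetical placeholders, not the actual HarmonicSS Reference Model.

```python
# Hypothetical mapping from one provider's local field names and coded values
# to Reference Model parameters and vocabulary terms (illustrative only).
FIELD_MAP = {"sex": "Gender", "dx": "Diagnosis"}
VALUE_MAP = {
    "Gender": {"m": "Male", "f": "Female"},
    "Diagnosis": {"pss": "Primary Sjogren Syndrome"},
}

def harmonize_record(local_record: dict) -> dict:
    """Translate a pseudonymised local record into Reference Model terms."""
    harmonized = {}
    for local_field, value in local_record.items():
        ref_field = FIELD_MAP.get(local_field)
        if ref_field is None:
            continue  # field not covered by the mapping
        harmonized[ref_field] = VALUE_MAP.get(ref_field, {}).get(value, value)
    return harmonized

print(harmonize_record({"sex": "f", "dx": "pss"}))
# -> {'Gender': 'Female', 'Diagnosis': 'Primary Sjogren Syndrome'}
```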
It should be noted that, as soon as the HarmonicSS Reference Model is built
and validated, the new data (either follow-up data of old patients or newly
collected data from new patients) will be collected based on this model and
will hence already be harmonized with the HarmonicSS Integrated Cohort.
Regarding data quality assurance, and completeness in particular, Task 4.1
“Guidelines and clinical preparatory actions for completing individual
cohorts” of the HarmonicSS workplan is specifically dedicated to ensuring the
data quality of the provided datasets, setting the clinical preparatory
actions required for filling in any missing data within the datasets based on
a minimum set of parameters. Work within this task involves the completion of
any missing data in the retrospective data, whereas any new data collected
during the project's activities are expected to follow the guidelines set out
within Task 4.1 and, thus, be complete.
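A completeness check of the kind foreseen in Task 4.1 could, for instance, look as follows; the minimum parameter set shown is purely illustrative, since the actual set is defined within Task 4.1.

```python
# Illustrative minimum parameter set; the actual set is defined in Task 4.1.
MINIMUM_PARAMETERS = ["Gender", "YearOfBirth", "Diagnosis", "DiagnosisDate"]

def missing_parameters(record: dict) -> list:
    """Return the minimum-set parameters that are absent or empty in a record."""
    return [p for p in MINIMUM_PARAMETERS if record.get(p) in (None, "")]

print(missing_parameters({"Gender": "Female", "Diagnosis": "pSS"}))
# -> ['YearOfBirth', 'DiagnosisDate']
```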
# Documentation, metadata and curation
The HarmonicSS Integrated Cohort will be accompanied by the detailed
description of the Reference Model, which will be the basis for its
development. This description will include the different parameters of the
model along with their value range, vocabularies and classifications used as
well as their semantic relationships. Moreover, each parameter will be
followed by a clear description of its meaning and purpose as well as any
important information required for its understanding and unambiguous use, such
as method applied (if applicable).
Furthermore, the HarmonicSS Integrated Cohort will be described through
metadata, which are presented in Table 26:
<table>
<tr>
<th>
**Metadata Element**
</th>
<th>
**Sub-element**
</th>
<th>
**Sub-element**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Creator(s)
</td>
<td>
Name
</td>
<td>
</td>
<td>
The name of the dataset creator
</td> </tr>
<tr>
<td>
Affiliation
</td>
<td>
</td>
<td>
The affiliation of the dataset creator
</td> </tr>
<tr>
<td>
Date of Creation
</td>
<td>
</td>
<td>
</td>
<td>
The date that the dataset was built
</td> </tr>
<tr>
<td>
Data collection period
</td>
<td>
From
</td>
<td>
</td>
<td>
The chronologically earliest date of any record within the dataset
</td> </tr>
<tr>
<td>
To
</td>
<td>
</td>
<td>
The chronologically latest date of any record within the dataset
</td> </tr>
<tr>
<td>
Format
</td>
<td>
</td>
<td>
</td>
<td>
The format of the dataset (e.g., xml, SQL, RDF)
</td> </tr>
<tr>
<td>
Conditions of data use
</td>
<td>
Who
</td>
<td>
Username
</td>
<td>
A user who can use the dataset under the specific conditions set in this
element.
</td> </tr>
<tr>
<td>
Role
</td>
<td>
A role who can use the dataset under the specific conditions set in this
element.
</td> </tr>
<tr>
<td>
Time Period
</td>
<td>
From
</td>
<td>
Starting date of conditions applied
</td> </tr>
<tr>
<td>
To
</td>
<td>
End date of conditions applied
</td> </tr>
<tr>
<td>
Actions Allowed
</td>
<td>
</td>
<td>
The actions which are set under this specific condition.
</td> </tr>
<tr>
<td>
Other Terms
</td>
<td>
</td>
<td>
Other terms related to this condition of data use.
</td> </tr>
<tr>
<td>
Related Publications
</td>
<td>
Author
</td>
<td>
</td>
<td>
The name of the publication author
</td> </tr>
<tr>
<td>
Affiliation
</td>
<td>
</td>
<td>
The affiliation of the publication author
</td> </tr>
<tr>
<td>
Publication Title
</td>
<td>
</td>
<td>
The title of the publication
</td> </tr>
<tr>
<td>
Editor
</td>
<td>
</td>
<td>
The editor of the publication
</td> </tr>
<tr>
<td>
Link to Publication
</td>
<td>
</td>
<td>
The website link at which the publication is published.
</td> </tr> </table>
_Table 26 Integrated Cohort Metadata_
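For illustration, a metadata record following the elements of Table 26 could be serialized as a simple structure such as the one sketched below; the concrete field names and the JSON representation are assumptions rather than a prescribed schema.

```python
import json

# A hypothetical Integrated Cohort metadata record following Table 26.
metadata = {
    "creators": [{"name": "J. Doe", "affiliation": "Example University"}],
    "date_of_creation": "2018-06-30",
    "data_collection_period": {"from": "1985-01-01", "to": "2017-12-31"},
    "format": "RDF",
    "conditions_of_data_use": [{
        "who": {"role": "Clinician"},
        "time_period": {"from": "2018-07-01", "to": "2020-06-30"},
        "actions_allowed": ["view", "analyse"],
        "other_terms": "Acknowledgement of HarmonicSS required",
    }],
    "related_publications": [],
}
print(json.dumps(metadata, indent=2))
```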
_Additional metadata capturing the value of the integrated cohort_ will
include the following combined fields as presented in Table 27.
<table>
<tr>
<th>
**Metadata Element**
</th>
<th>
**Sub-element**
</th>
<th>
**Sub-element**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Data User
</td>
<td>
Name
</td>
<td>
</td>
<td>
The name of the individual having used the data.
</td> </tr>
<tr>
<td>
Affiliation
</td>
<td>
</td>
<td>
The affiliation of the individual having used the data.
</td> </tr>
<tr>
<td>
Goal(s) of Data Use
</td>
<td>
</td>
<td>
</td>
<td>
The list of goals aimed at from the data use.
</td> </tr>
<tr>
<td>
Way of Data Use
</td>
<td>
Step ID
</td>
<td>
</td>
<td>
The ID of the step for capturing the order of the actions taken.
</td> </tr>
<tr>
<td>
Service Name
</td>
<td>
</td>
<td>
The list of HarmonicSS services applied on the data for achieving the
aforementioned goals.
</td> </tr>
<tr>
<td>
Input (optional)
</td>
<td>
</td>
<td>
The input to the aforementioned service
</td> </tr>
<tr>
<td>
Output
(optional)
</td>
<td>
</td>
<td>
The output of the aforementioned service
</td> </tr>
<tr>
<td>
Data Use Result
</td>
<td>
</td>
<td>
</td>
<td>
The result of the data use
</td> </tr>
<tr>
<td>
Link to Result
</td>
<td>
</td>
<td>
</td>
<td>
The link to the website with the details of the result (e.g., a publication),
if available
</td> </tr> </table>
_Table 27 Additional Metadata for Integrated Cohort_
Furthermore, metadata will also accompany each individual dataset being part
of the integrated cohort as depicted in the Table 28:
<table>
<tr>
<th>
**Metadata Element**
</th>
<th>
**Sub-element**
</th>
<th>
**Sub-element**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Dataset ID
</td>
<td>
</td>
<td>
</td>
<td>
A unique identifier of the dataset (see also the respective field in section
4.1 tables)
</td> </tr>
<tr>
<td>
Creator(s)
</td>
<td>
Name
</td>
<td>
</td>
<td>
The name of the dataset creator
</td> </tr>
<tr>
<td>
Affiliation
</td>
<td>
</td>
<td>
The affiliation of the dataset creator
</td> </tr>
<tr>
<td>
Date of Creation
</td>
<td>
</td>
<td>
</td>
<td>
The date that the dataset was built
</td> </tr>
<tr>
<td>
Latest Update Date
</td>
<td>
</td>
<td>
</td>
<td>
The date that the dataset was last updated
</td> </tr>
<tr>
<td>
Data collection period
</td>
<td>
From
</td>
<td>
</td>
<td>
The chronologically earliest date of any record within the dataset
</td> </tr>
<tr>
<td>
To
</td>
<td>
</td>
<td>
The chronologically latest date of any record within the dataset
</td> </tr>
<tr>
<td>
Format
</td>
<td>
</td>
<td>
</td>
<td>
The format of the dataset (e.g., xml, SQL, RDF)
</td> </tr>
<tr>
<td>
Conditions of dataset use
</td>
<td>
Who
</td>
<td>
Username
</td>
<td>
A user who can use the dataset under the specific conditions set in this
element.
</td> </tr>
<tr>
<td>
Role
</td>
<td>
A role who can use the dataset under the specific conditions set in this
element.
</td> </tr>
<tr>
<td>
Time Period
</td>
<td>
From
</td>
<td>
Starting date of conditions applied
</td> </tr>
<tr>
<td>
To
</td>
<td>
End date of conditions applied
</td> </tr>
<tr>
<td>
Actions Allowed
</td>
<td>
</td>
<td>
The actions which are set under this specific condition.
</td> </tr>
<tr>
<td>
Other Terms
</td>
<td>
</td>
<td>
Other terms related to this condition of dataset use.
</td> </tr> </table>
_Table 28 Individual Datasets’ Metadata_
In order to clearly present the value and potential of the HarmonicSS
Integrated Cohort, the documentation will also include **best practices based
on the Scenario’s Implementation and Results** , which will be structured as
follows:
Scenario title, Scenario goal(s), Scenario Implementation (flow of services
used), Scenario Result(s), Scenario Implementer(s).
In fact, these fields, along with parameters such as Scenario Implementation
Period, Scenario ID, will also serve as metadata for the scenarios’ results,
as presented in Table 29.
<table>
<tr>
<th>
**Metadata Element**
</th>
<th>
**Sub-element**
</th>
<th>
**Sub-element**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Scenario ID
</td>
<td>
</td>
<td>
</td>
<td>
A unique identifier of the scenario
</td> </tr>
<tr>
<td>
Scenario Implementer(s)
</td>
<td>
Name
</td>
<td>
</td>
<td>
The name of the scenario implementer
</td> </tr>
<tr>
<td>
Affiliation
</td>
<td>
</td>
<td>
The affiliation of the scenario implementer
</td> </tr>
<tr>
<td>
Scenario
Implementation Period
</td>
<td>
From
</td>
<td>
</td>
<td>
The start date of the scenario implementation
</td> </tr>
<tr>
<td>
To
</td>
<td>
</td>
<td>
The end date of the scenario implementation
</td> </tr>
<tr>
<td>
Scenario’s Goals
</td>
<td>
</td>
<td>
</td>
<td>
The list of goals aimed at from the scenario implementation.
</td> </tr>
<tr>
<td>
Scenario Implementation
</td>
<td>
Step ID
</td>
<td>
</td>
<td>
The ID of the step for capturing the order of the actions taken.
</td> </tr>
<tr>
<td>
Service Name
</td>
<td>
</td>
<td>
The list of HarmonicSS services used for achieving the aforementioned goals.
</td> </tr>
<tr>
<td>
Input (optional)
</td>
<td>
</td>
<td>
The input to the aforementioned service
</td> </tr>
<tr>
<td>
Output (optional)
</td>
<td>
</td>
<td>
The output of the aforementioned service.
</td> </tr>
<tr>
<td>
Scenario Result
</td>
<td>
</td>
<td>
</td>
<td>
The result of the scenario.
</td> </tr>
<tr>
<td>
Link to Result
</td>
<td>
</td>
<td>
</td>
<td>
The link to the website with the details of the result (e.g., a publication),
if available.
</td> </tr>
<tr>
<td>
Conditions of scenario’s results use
</td>
<td>
Who (optional)
</td>
<td>
Username
</td>
<td>
A user who can use the scenario’s results under the specific conditions set in
this element.
</td> </tr>
<tr>
<td>
Role
</td>
<td>
A role who can use the scenario’s results under the specific conditions set in
this element.
</td> </tr>
<tr>
<td>
Time Period (optional)
</td>
<td>
From
</td>
<td>
Starting date of conditions applied
</td> </tr>
<tr>
<td>
To
</td>
<td>
End date of conditions applied
</td> </tr>
<tr>
<td>
Actions Allowed
</td>
<td>
</td>
<td>
The actions which are set under this specific condition.
</td> </tr>
<tr>
<td>
Acknowledgement
</td>
<td>
</td>
<td>
The text that should be included for acknowledging use of the scenario’s
results.
</td> </tr>
<tr>
<td>
Other Terms
</td>
<td>
</td>
<td>
Other terms related to this condition of the scenario’s results use.
</td> </tr> </table>
_Table 29 Scenarios’ Results Metadata_
This documentation, including the metadata of the Integrated Cohort, will be
available at the project’s website as soon as it is prepared and finalized.
Most of the data to be integrated have undergone quality checks at the data
providers’ site. Moreover, each data provider should ensure that their data is
accurate, correct and up-to-date (see D9.3). Nevertheless, further quality
checks and curation processes are performed within Task 4.1 “Guidelines and
clinical preparatory actions for completing individual cohorts”. Work within
the latter task focuses on setting the clinical preparatory actions required
for data curation purposes, in an effort to ensure that all integrated
datasets are, among others, complete and valid. In fact, this work will also
feed Task 5.1 “Cohort Data Governance and Sharing Services” in order for the
latter to build a set of methodologies, tools and key performance indicators
(KPIs) for evaluating the data sources to be part of the HarmonicSS Integrated
Cohort in terms of usability, data quality, context and purpose, among others,
with the coordination steering committee closely monitoring the process.
# Storage and Back up
All data will be stored in the storage devices of the Okeanos private cloud
[1]. Okeanos is an IaaS (Infrastructure as a Service) located in Athens,
Greece, and operated by the Greek Research and Technology Network (GRNET).
Using an IaaS, one can build virtual computers or entire computer networks,
always connected to the Internet, without worrying about hardware failures,
cabling, connectivity hiccups or software troubles.
The overall storage allocation including RAM and CPUs as currently allocated
in the Okeanos cloud are depicted in Figure 1.
_Figure 1 The overall storage allocation including RAM and CPUs in the Okeanos
cloud_
A storage disk of 4 TB is allocated for the harmonized data processing. The
file storage space is a common cloud storage for data sharing between team
members and will be upgraded to 200 GB (it is similar to Google Drive, but for
the Okeanos case). The storage is organized in two main categories: a) storage
per Virtual Machine (VM) and b) global storage shared by all VMs, so that only
authorized users can access the shared data. The Okeanos cloud is based on the
“synnefo” cloud framework (https://www.synnefo.org/about/), an open source
cloud stack that provides services similar to Amazon Web Services (AWS).
Regarding storage and backup manipulation, the “snf-image-creator” family of
tools is used in the HarmonicSS private cloud to manipulate storage backups in
a holistic approach across all VMs. With “snf-image-creator”, all VMs of the
HarmonicSS project are backed up and, most importantly, can be preserved or
transferred to any cloud infrastructure, as VDI (VirtualBox), VHD (Microsoft
Hyper-V), VMDK (VMware), RAW disk format and other VM-specific image formats
are supported. According to the documentation of “snf-image-creator”, the host
system itself can be imaged, while sensitive user data (user accounts, ssh
keys, log files, etc.) located in the Operating System of the corresponding VM
can be excluded from the image creation. The created images can be uploaded to
the overall cloud infrastructure as private images (not visible to other cloud
users) using the “kamaki” Application Programming Interface (API), which is a
tool for managing OpenStack clouds. The “snf-image-creator” tool supports any
Linux distribution and can be installed even from source code using the Python
2 framework. It can be used as a command line tool or with a graphical
interface, but the command line tool is preferred. The information required
for the storage backup of any VM is:
1. The cloud account for uploading and registering the VM image
2. The short name that identifies the image
3. A short description of the image
4. The registration type that can be private or public (private is the default).
The cloud account for uploading and registering any backup storage image is
composed of an authentication URL combined with a secure TOKEN. Both URL and
TOKEN are securely visible only to the HarmonicSS Cloud Admin. A typical usage
case for creating an operational image of a VM is as follows:
**snf-mkimage / -a https://accounts.okeanos.grnet.gr/identity/v2.0/ -t**
**53CUR3_T0K34 --tmpdir /largeTMP --no-sysprep -o ubuntuVMHarmonicss1.raw**
The “snf-image-creator” tool is used next for uploading the backed-up VM image
to the private global storage of the “Pithos” environment allocated for the
HarmonicSS project; Pithos is the Virtual Storage service of Okeanos. Pithos
enables users to store their files online and access them anytime, from
anywhere, even from the VMs. A screenshot of “snf-image-creator” uploading a
newly created backed-up image to the cloud storage is depicted in Figure 2:
_Figure 2 A screenshot of the “snf-image-creator” for uploading a newly
created backed-up image to the cloud storage_
Any backup VM image can also be downloaded and secured by the cloud management
team of the project and, most importantly, can be imported into any other
cloud infrastructure in case such action is required.
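The command shown above can also be scripted for regular backups. The following is a minimal sketch wrapping the same snf-mkimage invocation, assuming the tool is installed on the host and that the authentication URL and token are supplied via hypothetical environment variables (OKEANOS_AUTH_URL, OKEANOS_TOKEN).

```python
import os
import subprocess
from datetime import date

def backup_vm_image(vm_name: str, tmpdir: str = "/largeTMP") -> None:
    """Create a raw backup image of the host VM with snf-mkimage (cf. above)."""
    output = f"{date.today().strftime('%y%m%d')}_{vm_name}.raw"
    subprocess.run(
        ["snf-mkimage", "/",
         "-a", os.environ["OKEANOS_AUTH_URL"],  # authentication URL (Cloud Admin only)
         "-t", os.environ["OKEANOS_TOKEN"],     # secure token (Cloud Admin only)
         "--tmpdir", tmpdir,
         "--no-sysprep",
         "-o", output],
        check=True,  # raise an error if the backup fails
    )

# backup_vm_image("ubuntuVMHarmonicss1")
```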
In case critical data are imported into the system, immediate backups will be
performed. In case a physical device is damaged, it will be replaced and a
full recovery of the data will be performed from the most recent backup (kept
outside the Okeanos private cloud). In terms of access management and
security, direct access to the data is not allowed for anyone. All data
storage devices are behind an OpenVPN [3] server. Public access and public IP
addresses to the data servers are prohibited. Consortium members connected
through the OpenVPN server can access their own custom data (internal data
required by the developed systems, data models produced by intermediate
processing, local databases and reference data) through secure REST services,
so that they can handle (read, update) the harmonized storage. Services for
data deletion in the global harmonized data storage are not allowed. No
Consortium member (partner of the project), even when connected through
OpenVPN, will be able to access the data server of the global datasets
directly: a) no permission will be granted and b) no user-level access to the
Virtual Machine of the data server will be provided. Inside the Virtual
Private Network, only secure REST services will be developed, with the
appropriate user grants to allow creation, viewing and updating of data. The
REST services will operate only through the OpenVPN network and no GET
services will be developed, thus disabling a possible inference attack even
inside the VPN by a Consortium member. The REST services will be accessible by
the collaborators using the SSL/TLS transport protocol, and OAUTH2 [4] access
tokens will further secure the service requests inside the VPN. Finally, data
will be kept pseudonymized by using a strong HASH algorithm, meaning that,
according to currently known technologies, the user identifiers cannot be
reversed (or decrypted; such a process is not feasible when a hashing
algorithm is used). Hence, minimal data security risks are foreseen. In fact,
the use of private OpenVPN connections behind a firewall and the disabling of
public access to the data servers will ensure controlled access for data
security purposes.
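As an illustration of the hash-based pseudonymisation described above, the following minimal sketch uses a keyed hash (HMAC-SHA256) so that pseudonyms cannot be recomputed without the secret key; the key handling shown is illustrative only, and the actual HarmonicSS algorithm and key management may differ.

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-stored-key"  # illustrative key handling

def pseudonymise(patient_identifier: str) -> str:
    """Derive a stable, non-reversible pseudonym from a patient identifier."""
    digest = hmac.new(SECRET_KEY, patient_identifier.encode("utf-8"),
                      hashlib.sha256)
    return digest.hexdigest()

# The same identifier always yields the same pseudonym, so records can be
# linked across visits without ever storing the original identifier.
print(pseudonymise("patient-00042"))
```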
In terms of standards, the HarmonicSS consortium utilizes the following
security standards and services in the Okeanos infrastructure: SSL/TLS,
OpenVPN, OAUTH2 authentication, and hash algorithms to protect patient
identifiers, thus making the data almost anonymized for the developer teams
using the Okeanos cloud.
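How a service call under these standards might look can be sketched as follows, using the Python requests library; the endpoint, token source and payload are hypothetical, and the actual HarmonicSS services may differ.

```python
import requests

ACCESS_TOKEN = "token-obtained-via-oauth2-flow"      # hypothetical token
BASE_URL = "https://data-server.harmonicss.vpn/api"  # hypothetical VPN endpoint

def update_record(record_id: str, fields: dict) -> None:
    """Update a harmonized record through a secure REST service (no GET/DELETE)."""
    response = requests.put(
        f"{BASE_URL}/records/{record_id}",
        json=fields,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        verify=True,  # enforce SSL/TLS certificate validation
        timeout=30,
    )
    response.raise_for_status()
```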
# Ethics and Legal Compliance
The HarmonicSS project makes its best efforts to ensure that all data-related
processes, from collection and sharing to data research and sustainability,
take place in compliance with the legal requirements established by the GDPR
(General Data Protection Regulation).
## Data Protection
The ethical, data protection and IPR issues surrounding the data research in
HarmonicSS are extensively analysed in D.3.2 _Ethical, legal, privacy and IPR
requirements for data sharing and clinical trials_ (M6). In particular, the
legal requirements for data sharing (i.e. informed consent and/or Ethics
Approval) (Section 4.7), the data de-identification measures (pseudonymisation
vs anonymization) (Section 4.4.4), the security obligations on the part of the
platform administrator and processing partners (Section 5), as well as the IPR
and ownership issues (Section 6) are covered in D3.2. Compliance with the
ethics requirements is ensured by WP9, which, inter alia, verifies that data
is collected and used for HarmonicSS only after the receipt of written
approval by the Ethics Committees (D.9.2; D.9.4) and that data protection
officers and/or authorities are duly involved (D.9.4).
The contractual framework, consisting of a Data Protection Memorandum of
Understanding (DP MoU) and an IPR Memorandum of Understanding (IPR MoU), is
proposed for the project with the aim: (a) to clarify the data protection
roles and responsibilities of the project parties and (b) to set up a
contractual mechanism for managing IPR in collaborative project results, inter
alia the integrative platform and the pSS cohort. The DP MoU and IPR MoU are
attached to D.3.2 as Annex 1 and 2, respectively. The conclusion of the legal
agreements is planned to be finalized in line with Task 4.4 “Harmonization and
individual cohorts integration”, i.e. by M26.
Regarding data privacy and security by pseudonymisation, the two options,
namely: anonymisation and pseudonymisation, were evaluated. With regard to the
safeguards for research, Article 89 (1) GDPR advocates research on anonymous
data “ _where the research can be fulfilled by further processing which does
not permit or no longer permits the identification of data subjects_ ”;
however, the GDPR allows pseudonymisation, if anonymization would render the
fulfilment of research goals impossible. The HarmonicSS Consortium decided to
apply pseudonymisation for two main reasons: (a) among the HarmonicSS tools
there is the “Patient selection tool for multinational clinical trials” (Task
7.1) which aims at offering a mechanism for facilitating patient recruitment
for clinical trials by applying the respective eligibility criteria onto the
HarmonicSS Integrated Cohort and (b) the management of incidental findings,
dictated by the ethics of clinical research, also operates on the possibility
of linking any research findings back to the individuals. Single-key vs
double-key pseudonymisation, and the workload associated with double-key
pseudonymisation, have also been described and analysed in D.3.2, Section
4.4.4.
As already mentioned in Section 6, all data will be stored in the storage
devices of the Okeanos private cloud. UoI has already encrypted all
communication to the aforementioned infrastructure ensuring that only the
technical partners and data researchers (acting as data processors) are able
to access the technical Cloud resources. Each data processor can access only
their own Virtual Machine in the Cloud infrastructure in order to perform
strictly their own individual tasks as assigned in the HarmonicSS project
workplan. Any data access will only be available on the pseudonymised data
through the necessary security mechanisms, including secure REST services in
the Virtual Private Network (VPN) of the Cloud infrastructure. No public
access to the Virtual Machines (VMs) will be allowed, while the clinical
partners will be able to connect to the HarmonicSS user services only to
specifically dedicated Virtual Machines. The tools and services which will be
developed within the Cloud infrastructure will prohibit any direct access to
the HarmonicSS Integrated Cohort, while no delete operation will be available.
Public IP addresses will be disabled for the development VMs, a limitation
which will ensure that no unauthorized transmission from, or data breach of,
any HarmonicSS data storage can occur.
### Data protection by default
Data protection by default is applied in the HarmonicSS project from the very
beginning, in order to provide extended safeguards for the protection of
personal data. Taking into account the nature, scope, context and purposes of
processing, as well as the potential risks that could affect the rights and
freedoms of the patients whose data are being processed, the data controller
implements data protection by design. All principles of Article 5 of the GDPR
are taken into account for data protection by design. The security
requirements are presented in Section 6, while the rights of data subjects are
protected with technical processing security measures as described in Article
32 of the GDPR. To illustrate data protection by design and by default in the
HarmonicSS project, the following table is presented.
<table>
<tr>
<th>
**GDPR - Article 25 Obligations**
</th>
<th>
</th>
<th>
**Data Phases**
</th>
<th>
</th> </tr>
<tr>
<th>
**Collection**
**(M1-M42)**
</th>
<th>
**Processing (M15-M42)**
</th>
<th>
**Harmonization (M15-M42)**
</th> </tr>
<tr>
<td>
“the controller shall, both at the time of the determination of the means for
processing and at the time of the processing itself, implement appropriate
technical and organisational measures, such as **pseudonymisation** , which
are designed **to implement data protection principles, such as data
minimisation** , in an effective manner and to integrate the necessary
safeguards into the processing in order to meet the requirements of this
Regulation and protect the rights of data subjects.”
</td>
<td>
Article 5 obligations
including
consent forms
&
Protection of
Personal Rights
</td>
<td>
Pseudonymization
&
Data
Minimization
</td>
<td>
Pseudonymization
&
Data
Minimization
&
Data Retention
Policy
&
Safeguards of
Human Rights
</td> </tr>
<tr>
<td>
“The controller shall implement appropriate technical and organisational
measures for ensuring that, **by default** , only personal data which are
necessary for each specific purpose of the processing are processed. That
obligation applies to the amount of personal data collected, the extent of
their processing, the period of their storage and their accessibility.”
</td>
<td>
Amount of Personal Data=~14180 records
&
Extent of Processing=28 months
&
Period of Storage=28 months
</td>
<td>
Amount of Personal Data=~14180 records
&
Extent of Processing=28 months
&
Period of Storage=28 months
</td>
<td>
Amount of Personal Data=~14180 records
&
Extent of Processing=28 months
&
Period of Storage=28 months
</td> </tr>
<tr>
<td>
“In particular, such measures shall ensure that by default personal data are
not made accessible without the individual's intervention to an indefinite
number of natural persons.”
</td>
<td>
Users=Clinicians
</td>
<td>
Users=Clinicians,
Technical Experts
</td>
<td>
Users=Clinicians,
Technical Experts
</td> </tr> </table>
_Table 30 Data protection by design and by default in HarmonicSS_
## IPR and ownership
The issues of IPR and ownership with respect to the Integrative Cohort and the
project infrastructure are to be clarified by contractual arrangement between
the project parties, namely by the conclusion of the IPR MoU (D.3.2, Annex 2).
The separation between rights in data and rights in results produced from data
has been reformulated in Clause 3.1 IPR MoU. Pursuant to Article 24.1 EC-GA,
the Parties who contribute clinical data into the Project, as entered in the
agreement on background, hold the rights in the data. These Parties retain the
rights in the raw data they contribute throughout the duration of the Project
and allow such data to be processed in the Project under the terms specified
in the agreement on background and/or the terms for the grant of Access
Rights, as provided in Section 9, Articles 9.2.6 and 9.3 CA, in particular.
As regards ownership in the HarmonicSS Integrative Cohort, qualified as
research result, it would fall under the rules of composite ownership, as
introduced by Clause 4.2 IPR MoU:
_“If Project Results are generated by two or more Parties and if contributions
of the Parties are separately identifiable and/or constitute separate and/or
independent works in themselves, but are assembled into a collective whole as
inter-dependent parts (albeit without an intent to be merged to the point of
being used as a whole only), the contributing Parties agree that such Results
constitute composite work and shall be owned by the contributing Parties
according to the contribution of each.”_
Following this definition, parties contributing pSS datasets into the project
would hold rights in the Integrative Cohort according to the contribution of
each.
It should be noted, however, that given the complexity of the project’s
activities and the accompanying legal analysis, the aforementioned issues are
still under discussion and analysis. The main issues associated with data
protection obligations and management of IPR are expected to be resolved by
M26, as part of the legal and ethical activities in WP3 and the exploitation
plan formulation in WP2.
# Data Preservation, Sharing and Access
## Data Preservation
The data used in the HarmonicSS project were described in detail in the
previous sections of the current document, in accordance with the GDPR.
The data will be collected and preserved according to the rules of the GDPR.
Data will be limited and minimized to exactly what is required to carry out
the project, while signed consent forms from all users will demonstrate the
fair and transparent processing of data. Data will be kept exactly for the
project period, and the data subjects will have all the rights that the GDPR
grants them (right of access, right to erasure, right of rectification, right
to restrict processing, right to object to processing, right to data
portability); after the end of the project, the Okeanos administration team
will destroy the hard disks used by the infrastructure service, while all
developed data models, techniques, methods and algorithms will be retained as
an entire system. Furthermore, the harmonized large dataset will be kept,
since it will provide useful information for specific clinical cases in
anonymized form. The harmonized dataset (a main outcome of the project) will
be retained as an anonymized data repository able to serve future request
cases, as enforced by the project’s requirements.
During the project development, and according to Article 5 (1)(e) of the GDPR,
stating that _“Personal data must be kept in a form that permits
identification of data subjects for no longer than is necessary for the
purposes for which the data were collected or for which they are further
processed”_, all data that will be used in the development phases of the
project have been decided at the beginning and no future data requests will be
addressed. According to the lawfulness of processing (including consent forms
at the clinical sites) and data protection by design, as required by the GDPR,
the exact data processing of all research phases of the project using
anonymized and pseudonymized data has been decided and no future changes in
research usage may occur. It should be noted that HarmonicSS will offer
mechanisms for exporting one or more pseudonymised data records in any
human-readable electronic format, should such a need arise for respecting any
data portability requirement.
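A minimal sketch of such an export mechanism is given below, under the assumption of JSON and CSV as the human-readable formats; the field names are illustrative.

```python
import csv
import json

def export_records(records: list, path: str, fmt: str = "json") -> None:
    """Export pseudonymised records in a human-readable format (data portability)."""
    if fmt == "json":
        with open(path, "w", encoding="utf-8") as f:
            json.dump(records, f, indent=2)
    elif fmt == "csv":
        with open(path, "w", encoding="utf-8", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=sorted(records[0]))
            writer.writeheader()
            writer.writerows(records)

export_records([{"pseudonym": "9f2c1a", "Diagnosis": "pSS"}], "export.json")
```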
## Data Sharing and Access
As already mentioned and described in the previous sections, any access to the
HarmonicSS Integrated Cohort will be available only through the HarmonicSS
services with no direct access to the data being allowed. Hence, the data
sharing will be conducted only in the context of HarmonicSS services usage
under the HarmonicSS scenarios implementation and within the HarmonicSS
Consortium. The HarmonicSS data documentation, which will be available at the
project’s website as soon as it is prepared and finalized (see Section 5),
will present the Reference Model in detail, along with specific information
regarding the Results of the Scenarios concerning the use of the HarmonicSS
data and services.
Regarding the scenarios results (as mentioned in Section 3.2), they will be
available at the project website as soon as they are delivered and will be
accompanied by metadata including [Scenario ID, Scenario title, Scenario
goal(s), Scenario Implementation (flow of services used), Scenario Result(s),
Scenario Implementer(s), Scenario Implementation Period]. Following the
project timeplan, the first 4 scenarios (i.e., 1) Stratification model for
classifying pSS patients, 2) Validation of existing biomarkers and discovery
of novel ones, 3) Models of lymphomagenesis, and 4) Formulation of common
clinical practices, guidelines and rules) will be available in M36 and the
health policy implementation ones in M42. In order for potentially interested
users to find this data, the results will be properly outlined on the project
website and their availability will be disseminated through the project
dissemination channels. Any user interested in this data will then register at
the project website as a guest and be able to go through the scenarios
results. The user will be prompted to accept specific “Terms of usage” prior
to the finalization of the registration. These terms will include specific IPR
protection related conditions and will guide the user regarding the required
acknowledgement in case of data re-use. The latter will also be presented as
information to the user when accessing the scenarios results.
# Responsibilities and Resources
Regarding the data management related responsibilities, the Data Control
Committee (DCC) consisting of three coordinating partners has been established
to take over the role of data controller for the project. This committee will
be in charge of implementing the DMP and assigning the relevant roles and
responsibilities (from data capture and data quality assurance to metadata
production and data archiving and sharing) as well as ensure that it is
reviewed and revised, if necessary.
The Data Control Committee (DCC) is formed by:
* **Project coordinator:** ETHNIKO KAI KAPODISTRIAKO PANEPISTIMIO ATHINON (UoA), established in 6 CHRISTOU LADA STR, ATHINA 10561, Greece, VAT number EL090145420, responsible for data protection: _Prof. Dr. Athanasios Tzioufas_ ;
* **Scientific coordinator** : PANEPISTIMIO IOANNINON (UoI), 74670, established in PANEPISTEMIOYPOLE, PANEPISTEMIO IOANNINON, IOANNINA 45110, Greece, VAT number EL090029284, responsible for data protection: _Prof. Dr. Dimitrios I. Fotiadis_ ;
* **Clinical coordinator:** UNIVERSITA DEGLI STUDI DI UDINE (DSMB), CF80014550307, established in VIA PALLADIO 8, UDINE 33100, Italy, VAT number IT01071600306, responsible for data protection: _Prof. Dr. Salvatore De Vita_
The set-up of the data protection framework for the project is led by the
HarmonicSS legal partner Leibniz Universität Hannover (LUH).
It should be noted that the list of the persons and entities serving as data
controllers, data processors and data providers who will be responsible for
handling data management issues will be included in the letter to the Data
Protection Authority (Greek Data Protection Authority), as soon as HarmonicSS
is in the phase of collecting the data.
# Conclusions
This document presented the data management plan for the HarmonicSS project,
covering both retrospective data and prospective data, either from old
patients in their follow-up visits or from new patients.
The document presents in detail the data collection process, the metadata and
accompanying documentation, the storage and back up mechanisms, the data
preservation, sharing and access methods as well as the ethical and legal
compliance of the DMP. Finally, the specific responsibilities regarding the
DMP implementation and review (if required) have been described.
# 1\. INTRODUCTION
The European Commission wants to emphasize that research data is as important
as the publications it supports. As part of the stipulation that open access
to scientific publications is mandatory for all scientific publications
resulting from Horizon 2020 funded projects, projects must also aim to deposit
the research data needed to validate the results presented in the deposited
scientific publications, known as "underlying data". In order to effectively
supply this data, projects need to consider at an early stage how they are
going to manage and share the data they create or generate.
A DMP describes the data management life cycle for all data sets that will be
collected, processed or generated under a research project. It is a document
outlining how research data will be handled during the project, and even after
the project is completed, describing what data will be collected, processed or
generated, following what methodology and standards, whether and how this data
will be shared and/or made open, and how it will be curated and preserved. The
DMP is not a fixed document; it evolves and gains more precision and substance
during the lifespan of the project.
## 1.2 Document description
The Data Management Plan intends to identify the dataset which is going to be
processed, to define a general protocol to create, manage and guarantee free
access to results and data collected within the project lifespan. This
document will be periodically updated along the duration of the project.
Due to the project's nature, the data managed in the project cannot be
considered sensitive, beyond some contact details and answers to
questionnaires. In VisualMusic, the amount of information will be relatively
small, since the interest groups are established and focused on VJs and video
artists, and data collection is only addressed for consultation matters.
More detailed versions of the DMP will be submitted in case any significant
change occurs, such as the generation of new data sets or any potential change
in the consortium agreement.
# 2\. DATA COLLECTION
## 2.1 Data description
In the VisualMusic project there are five different sorts of data that will be
gathered and produced during the project's lifetime:
* **Personal Data:** contact details from stakeholders and project partners who are taking part in either the requirements definition, any consultation procedures or else becoming a member of the On-line Community or CIAG.
* **Questionnaires:** online surveys created in order to collect feedback from industry professionals and end users about some aspects of the project that the consortia wish to confirm and validate.
* **Interviews:** in order to know more in depth customers´ expectations about the product functionalities and capabilities.
* **Graphic information:** clips created by video artists and shared among end-users. Sessions, videos, etc. created by end- users are shared among them while using the system in their own facilities.
* **Deliverables:** these documents were described in the Description of Work and accepted by the EC. According to the Workplan, these reports will be published on the Project website to be accessible for the general public. Some of the deliverables include aggregated data obtained by means of questionnaires and interviews, summing up the gathered feedback without revealing personal information from participants.
**Deliverables**
**Graphic**
**information**
**Interviews**
**Questionnaires**
**Contact**
**information**
**Figure 1. Types of Data**
Most of the datasets will be part of the information generated under the
following tasks, since these work packages involve contacting and getting
feedback from stakeholders and final users. Information obtained in WP2 and
WP4 will mainly consist of the output resulting from questionnaires and
interviews distributed to end users. Furthermore, in WP3 different clips are
created by the video artists that will be part of the themes and sessions
created by the VJs. However, data within WP6 and WP7 is generally made up of
personal contact details from potential end-users to whom the forthcoming
results could be of interest; this personal information won't be shared unless
there is explicit consent.
<table>
<tr>
<th>
**WP/Task nr.**
</th>
<th>
**WP/ Task Description**
</th>
<th>
**Responsible**
</th>
<th>
**Output**
</th> </tr>
<tr>
<td>
WP2.- User Consultations & Requirements Definitions
</td>
<td>
UNIPD
</td>
<td>
Deliverable
</td> </tr>
<tr>
<td>
Task 2.2
</td>
<td>
Identification of functionality requirements
</td>
<td>
Questionnaires/
Interviews
</td> </tr>
<tr>
<td>
Task 2.3
</td>
<td>
Identification and monitoring of user needs and interests
</td>
<td>
Questionnaires/
Interviews
</td> </tr>
<tr>
<td>
WP3.- VisualMusic Player Development, Modules and Scene Exporter
</td>
<td>
BRA
</td>
<td>
3DClips
</td> </tr>
<tr>
<td>
Task 3.6
</td>
<td>
Integration of Modules and VisualMusic Player
</td> </tr>
<tr>
<td>
WP4.- System Verification and Validation
</td>
<td>
BRA
</td>
<td>
Deliverable
</td> </tr>
<tr>
<td>
Task 4.2
</td>
<td>
VisualMusic Training and Set Up
</td> </tr>
<tr>
<td>
Task 4.3
</td>
<td>
Test Sessions and Data Collection
</td>
<td>
BRA
</td>
<td>
3DClips/Themes Questionnaires/
Deliverable
</td> </tr>
<tr>
<td>
Task 4.4
</td>
<td>
Data analysis and feedback
</td>
<td>
UNIPD
</td>
<td>
Questionnaires/
Deliverable
</td> </tr>
<tr>
<td>
WP6.-Dissemination
</td>
<td>
SRP
</td>
<td>
Deliverable
</td> </tr>
<tr>
<td>
Task 6.2.
</td>
<td>
Dissemination activities
</td>
<td>
Contact details/ videos
</td> </tr>
<tr>
<td>
WP7 - Commercial Exploitation and Business Planning
</td>
<td>
SRP
</td>
<td>
Contact details
</td> </tr>
<tr>
<td>
Task 7.1
</td>
<td>
Establish and Manage Commercial Impact Advisory Group
</td>
<td>
BRA
</td>
<td>
Contact details
</td> </tr> </table>
**Table 1. Work Packages data outcome**
## 2.2. Participants
As explained in deliverable 2.1 User Consultation Protocol and Tools, users in
the **VisualMusic** project are composed of:
* _End-users_ participating in the project, as stated in D.2.1, the user partners in the consortia are: Club Entertainment (Belgium), Link Associated (Italy); Square Rivoli Publishing (France), Oliver Huntemann (Germany).
The end-users are considered to be those persons/organisations who will
actually use the VisualMusic product.
* _Commercial impact advisory group_ which is formed from a group of _professionals_ from the music industry who are not directly connected to the project, with whom it is intended to exchange a deeper analysis and discuss the commercial potential of **VisualMusic** product.
* _Users from outside the consortia._ They are stakeholders from live performance and music sector not included in the consortium but who are members of the Online Community of Interest and may become potential customers.
**End users**
**(**
**within the consortium**
**)**
**CIAG**
**Users (outside the Consortium)**
**Figure 2. Different participants’ groups involved in the VisualMusic
project**
## 2.3. Tools
### 2.3.1. Questionnaires
This is one of the main tools for collecting the data for the establishment of
the user requirements and validation. These forms have been designed by UNIPD.
An on-line questionnaire was made available on the project website and sent to
a group of potential stakeholders (i.e. end-users, CIAG, and users outside the
consortium). The contact details and the replies to individual questionnaires
will be confidential; VisualMusic will only be allowed to share aggregated
information.
### 2.3.2. Interviews
To complement the data from the questionnaires, there was also a series of
face-to-face/Skype interviews organized by the team of researchers from UNIPD,
which were recorded for subsequent analysis by UNIPD. This information will be
kept confidential.
### 2.3.3. Validation phase and Data collection
During the validation process, user experiences will be collected in the form
of questionnaires. Data collection will be based on actual user experiences
after the end-users have used VisualMusic system to create their own demos.
The emphasis lies on the practical experience and actual demos.
All end users are committed to providing feedback about their impressions
after having used VisualMusic and implementing the solution in their own
facilities or live performances. The feedback will be collected during
March–June 2018.
The end users are going to be provided with a questionnaire containing a list
of questions and exercises, which will be detailed in D.4.2, to be answered
after having used VisualMusic for a certain period of time.
It will be important that end users also share data in the form of videos and
other visual material. The materials are intended to be submitted via the
shared repository or, if necessary, some other transfer method. The data
collection should be planned and organised based on each end user's individual
needs. All the collected material will be compiled into a final report (D4.2)
outlining the most important conclusions from the validation stage.
## 2.4. Evaluation and analysis of the data
The resulting sessions created by all the user partners will contribute to
analysing the usability of the tool and refining the technology components, as
well as advising the users on how to optimize the use of the technology.
The conclusions obtained by means of questionnaires, interviews, etc., which
cannot be considered sensitive, will be made public, but compiled as common
conclusions gleaned from the validation and never at an individual level. The
collected material will be processed into both written and visual form (clips,
pictures, themes from demos, etc.) in the final reports in order to support
the further development of **VisualMusic**.
End users are expected to provide questionnaires, photos, etc. throughout the
different stages of the project: first about the expectations and the user
cases, then about the demo performance, and finally a concluding report about
the final demo products (was the final product what you had expected in terms
of quality, better or worse, and how/why, etc.).
# 3\. DOCUMENTATION AND METADATA
As explained in previous sections of the DMP, data produced in VisualMusic
will be partly the outcome of analysing questionnaires and interviews to
better know the users' expectations and their perception of the potential of
the product. However, the data collected will also have a scientific value and
advance user study methodology in the domain of human-computer interaction.
Therefore, the anonymized, quantitative answers to the survey and the
anonymized transcripts of the interviews will be shared with the scientific
community.
Conclusions resulting from the research are going to be openly published and
summarised in the approved deliverables, whose final versions will be
accessible on the project website. They are also going to be published in a
scientific journal and the anonymous data shared via an OpenAIRE-compliant
repository.
As a first stage, information is initially foreseen to be saved and backed up
on personal computers. Additionally, file nomenclature will follow personal
criteria. Regarding file versioning, the intention is to follow the project
policies detailed in D.1.1 Project Handbook.
At a second stage, the consortium has chosen the Google Drive platform in
order to upload and share information, enabling it in this way to be
accessible among project partners. For the VisualMusic installer and codified
3D clips, an ftp server will be used, allowing private access to the project
partners.
Concerning personal contact details, an informed consent form will be
delivered before the interviews and questionnaires held with users.
Regarding the Online Community of Interest, the consortium keeps the members'
email addresses in Wordpress, in order to distribute newsletters via email and
only for communication purposes. Some CIAG members gave their authorization to
the project consortium to publish their contact details and photo in the
corresponding section of the website. Information collected via questionnaires
and interviews will be published collectively, never revealing any personal
opinion.
At this stage of the project, the main formats of files containing information
are described in the following table. However, this information is subject to
future changes, which will be duly reflected in the next versions of the DMP:
<table>
<tr>
<th>
**Type of Data**
</th>
<th>
**File Format**
</th> </tr>
<tr>
<td>
3D Clips
</td>
<td>
"Python code files and
Brainstorm project files (.py,
.scn y .ldr)"
</td> </tr>
<tr>
<td>
Themes
</td> </tr>
<tr>
<td>
Questionnaires
</td>
<td>
csv
</td> </tr>
<tr>
<td>
Interviews
</td>
<td>
rtf (transcriptions)
</td> </tr>
<tr>
<td>
Videos
</td>
<td>
avi, mpeg
</td> </tr>
<tr>
<td>
Deliverables
</td>
<td>
Microsoft Word (compatible versions), Pages, PDF
</td> </tr>
<tr>
<td>
Webinars, Demo Sessions
</td>
<td>
AVI, FLT, mp4
</td> </tr>
<tr>
<td>
Contact Details (just email)
</td>
<td>
Wordpress
</td> </tr> </table>
**Table 2. File formats**
# 4\. ETHICS AND LEGAL COMPLIANCE
On the one hand, UNIPD, as responsible for the User Consultation and
Validation process deliverables, is in charge of data security and legal
compliance for the data UNIPD collects. As a public institution, the
university acts in accordance with its internal Information Security Policies
and complies with the applicable legislation and professional codes: the
American Psychological Association code of ethics and the Association for
Computing Machinery code of ethics. For data protection, until 25 May 2018,
the EU Directives 95/46/EC and 97/66/EC, the Directive 2002/58/EC on privacy
and electronic communications and the Italian Legislative Decree no. 196/2003
apply; from 25 May 2018, the European General Data Protection Regulation
applies.
Brainstorm is a company certified under ISO 9001 and is committed to ensuring
the necessary measures to guarantee data protection.
In deliverables, answers from respondents are not going to be singled out
individually; thereby, it will be impossible for external people to identify
respondents' answers. Data will be analysed as a whole; questionnaires were
marked with an ID (pseudonym) and their association with identifying
information will either not be possible or not be disclosed at any time.
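The ID-based marking described above could be implemented along the following lines; this is a minimal sketch assuming a CSV export of the questionnaire responses, and the "name"/"email" column names are hypothetical.

```python
import csv
import uuid

def pseudonymise_responses(in_path: str, out_path: str) -> None:
    """Replace respondent names/emails with random IDs in a questionnaire CSV."""
    with open(in_path, newline="", encoding="utf-8") as src, \
         open(out_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        kept = [c for c in reader.fieldnames if c not in ("name", "email")]
        writer = csv.DictWriter(dst, fieldnames=["respondent_id"] + kept)
        writer.writeheader()
        for row in reader:
            row.pop("name", None)   # drop identifying columns
            row.pop("email", None)
            row["respondent_id"] = uuid.uuid4().hex[:8]
            writer.writerow(row)
```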
# 5\. STORAGE AND BACK UP
Initially, data have been stored on Google Drive, where all the information
will be uploaded in order to be accessible by all the consortium partners.
Google Drive is used to back up the data and, at the same time, serves as a
repository among partners to facilitate data exchange. Regarding deliverables,
they will be uploaded to the project website. User data are stored by UNIPD on
dedicated hard disks and folders accessible only to the project team.
The onus of data storage for questionnaires and interviews will be on UNIPD,
but only for practical reasons, since they will be in charge of leading the
questionnaire and interview collection. Concerning demo session videos and
webinars, Brainstorm will assume the responsibility of keeping the information
safe. Last but not least, personal information will be kept on a personal
computer with private access.
# 6\. DATA SHARING
Furthermore, public deliverables will be uploaded and accessible in due course
in the Outcomes section of the project website.
Graphic material such as demonstrations, webinars and session videos will be
uploaded to the project's YouTube channel to be openly accessible to the
general public.
Anonymous data from questionnaires and interviews will be stored on Zenodo.
Author copies of the publications will be published on the institutional
repository of the University of Padua (Padua Research Archive - IRIS).
# 7\. SELECTION AND PRESERVATION
At this stage, the intention is to preserve the data for at least 5 years
after the end of the project.
# 8\. RESPONSIBILITIES AND RESOURCES
As a collaborative project, data management responsibility is divided among
different persons and organisations depending on the role they have adopted
in the project:
<table>
<tr>
<th>
**Type of Data**
</th>
<th>
**Resource**
</th>
<th>
**Responsible**
</th> </tr>
<tr>
<td>
Questionnaires/ Interviews
</td>
<td>
Google Drive/Personal computer
</td>
<td>
Anna Spagnolli (UNIPD)
</td> </tr>
<tr>
<td>
Stakeholders contact details
</td>
<td>
Excel file/PC
</td>
<td>
Manuel Soler (SRP)
</td> </tr>
<tr>
<td>
Demonstrations, Webinars, user cases
</td>
<td>
YouTube channel / Ftp
</td>
<td>
Javier Montesa (Brainstorm)
</td> </tr>
<tr>
<td>
Deliverables
</td>
<td>
Google Drive/ Website/PC
</td>
<td>
Javier Montesa (Brainstorm)
</td> </tr> </table>
**Table 3. Storage resources**
Taking into consideration the nature of the data handled in the project, no
exceptional measures are foreseen to be necessary in order to carry out this
plan. Moreover, no additional expertise will be required for data management.
Regarding user data storage and back-up (interviews and questionnaires), the
project has agreed to appoint task leaders to ensure that the commitments of
the plan are met.
<table>
<tr>
<th>
**Task name**
</th>
<th>
**Responsible person name**
</th> </tr>
<tr>
<td>
Data collection
</td>
<td>
Anna Spagnolli (UNIPD)
</td> </tr>
<tr>
<td>
Metadata production
</td>
<td>
Anna Spagnolli (UNIPD)
</td> </tr>
<tr>
<td>
Data storage & back up
</td>
<td>
Anna Spagnolli (UNIPD)
</td> </tr>
<tr>
<td>
Data archiving & sharing
</td>
<td>
Javier Montesa (Brainstorm)
</td> </tr> </table>
**Table 4. Task leaders**
# Executive Summary
In Horizon 2020 a limited pilot action on open access to research data has
been implemented. Participating projects have been required to develop a Data
Management Plan (DMP).
This deliverable provides the third version of the Data Management Plan
elaborated by the AUTOPILOT project. The purpose of this document is to
provide an overview of the main elements of the data management policy. It
outlines how research data was handled during the AUTOPILOT project and
describes what data was collected, processed or generated, following what
methodology and standards, whether and how this data was shared and/or made
open, and how it was curated and preserved. In addition, the list of data
types, the metadata and the global data collection processes are defined in
this document.
The AUTOPILOT Data Management Plan refers to the latest EC DMP guidelines 1
. This version has explicit recommendations for full life cycle management
through the implementation of the FAIR principles, which state that the data
produced shall be Findable, Accessible, Interoperable and Reusable (FAIR).
The document first provides an ontology for the data used during the
AUTOPILOT project, generated by a diversity of data sources such as vehicles
and on-board and road-side sensors. The report presents the hierarchical IoT
architecture, the different data set categories and the metadata, which are
important for interpreting the data content in the IoT platforms.
The report includes a detailed presentation of the methodology for the
AUTOPILOT data management, taking into account the decision of the AUTOPILOT
consortium to apply for the ORDP.
The report provides the description of the data sets structures, relating to
the different categories of IoT devices and other data sources like vehicles,
but also the specific types and data set used at the different pilot sites,
according to the use cases deployed and assessed locally.
# Introduction
## Objectives of the project
Automated driving is expected to increase safety, to provide more comfort and
to create many new business opportunities for mobility services. The Internet
of Things (IoT) is about enabling connections between objects or "things"; it
is about connecting anything, anytime, anyplace, using any service over any
network.
The “**AUTO**mated Driving **P**rogressed by **I**nternet **O**f **T**hings”
(AUTOPILOT) project will especially focus on utilizing the IoT potential for
automated driving.
The overall objective of AUTOPILOT is to bring together relevant knowledge and
technology from the automotive and the IoT value chains in order to develop
IoT-architectures and platforms which will bring Automated Driving towards a
new dimension. This is realized through the following main objectives:
* Use, adapt and innovate current and advanced technologies to define and implement an IoT approach for autonomous and connected vehicles
* Deploy, test and demonstrate IoT based automated driving use cases at several permanent pilot sites, in real traffic situations with: Urban driving, Highway pilot, Automated Valet Parking, Platooning and Real-time car sharing.
* Create and deploy new business products and services for fully automated driving vehicles, used at the pilot sites: by combining stakeholders’ skills and solutions, from the supply and demand side
* Evaluate with the involvement of users, public services and business players at the pilot sites:
* The suitability of the AUTOPILOT business products and services as well as the ability to create new business opportunities
* The user acceptance related to using the Internet of Things for highly or fully automated driving
* The impact on the citizens’ quality of life
* Contribute actively to standardization activities as well as to consensus building in the areas of Internet of Things and communication technologies
Automated vehicles largely rely on on-board sensors (LiDAR, radar, cameras,
etc.) to detect the environment and make reliable decisions. However, the
possibility of interconnecting surrounding sensors (cameras, traffic light
radars, road sensors, etc.) exchanging reliably redundant data may lead to new
ways to design automated vehicle systems potentially reducing cost and adding
detection robustness.
Indeed, many types of connected objects may act as an additional source of
data, which will very likely contribute to improve the efficiency of the
automated driving functions, enable new automated driving scenarios as well as
increase the automated driving function safety while providing driving data
redundancy and reducing implementation costs. These benefits will enable
pushing the SAE level of driving automation to full automation, keeping the
driver out of the loop. Furthermore, by making autonomous cars a full entity
in the IoT, the AUTOPILOT project enables developers to create IoT/AD
services as easily as accessing any entity in the IoT.
**Figure 1 – The AUTOPILOT overall concept**
The Figure above depicts the AUTOPILOT overall concept including the different
ingredients to apply IoT to autonomous driving:
* The overall IoT platforms and architecture, allowing the use of the IoT capabilities for autonomous driving.
* The Vehicle IoT integration and platform to make the vehicle an IoT device, using and contributing to the IoT.
* The Automated Driving relevant sources of information (pedestrians, traffic lights …) becoming IoT devices and extending the IoT eco-systems to allow enhanced perception of the driving environment on the vehicle.
* The communication network using appropriate and advanced connectivity technology for the vehicle as well as for the other IoT devices.
## Purpose of the document
This deliverable presents the third version of the data management plan
elaborated for the AUTOPILOT project. The purpose of this document is to
provide an overview of the data set types present in the project and to define
the main data management policy adopted by the Consortium.
The data management plan defines how data in general and research data in
particular are handled during the research project.
It describes what data is collected, processed or generated by the IoT
devices and the whole IoT ecosystem, what methodologies and standards shall
be followed during the collection process, whether and how this data is
shared and/or made open, not only for the evaluation needs but also to comply
with the ORDP requirements 2 , and how it shall be curated and preserved. In
addition, the data management plan identifies the four (4) key process
requirements that define the data collection process and provides first
recommendations to be applied.
The document is structured as follows. **Chapter 2** outlines a data
overview in the AUTOPILOT project. It details AUTOPILOT data categories, data
types and metadata, then the data collection processes to be followed and
finally the test data flow and the test data architecture environment.
**Chapter 3** gives a global vision of the test data management methodology
developed in WP3 across pilot sites.
**Chapter 4** gives insights about the Open Research Data Pilot under the
H2020 guidelines. **Chapter 5** provides a detailed description of the
datasets used in the AUTOPILOT project with a focus on the methodologies,
standards and data sharing policies used.
**Chapter 6** gives insights about the FAIR data management principles under
the H2020 guidelines and how AUTOPILOT started actions in order to be FAIR
compliant.
Finally, the remaining chapters outline the necessary roles, responsibilities
and ethical issues.
## Intended audience
The AUTOPILOT project addresses highly innovative concepts. As such, the
intended audience of the project is the scientific community interested in
IoT and/or automotive technologies. In addition, due to the strong expected
impact of the project on their respective domains, the other expected
audiences are the automotive industrial communities, telecom operators and
standardization organizations.
# Data in AUTOPILOT: an overview
The aim of this chapter is:
* To provide a first categorization of the data.
* To identify a list of the data types that are generated.
* To provide a list of metadata that is used to describe generated data and enable data reuse.
* To provide recommendations on data collection and sharing processes during the project and beyond.
## The AUTOPILOT hierarchical IoT architecture
The AUTOPILOT project will collect a large amount of raw data to measure the
benefit of IoT for automated driving with multiple automated driving use cases
and services, at different pilot locations.
Data from vehicles and sensors is collected and managed through a hierarchy
of IoT platforms, as illustrated in Figure 2.
The figure above shows a federated architecture with the following four
layers:
* **In-vehicle IoT Platforms:** Here is everything that is mounted inside the vehicle, i.e., components responsible for AD, positioning, navigation, real time sensor data analysis, and communication with the outside world. All mission critical autonomous driving functions should typically reside in this layer.
* **Road-Side IoT Platforms:** Road-side and infrastructure devices, such as cameras, traffic light sensors, etc., are integrated and managed as part of road-side IoT PFs covering different road segments and using local low latency communication networks and protocols as required by the devices and their usage.
* **Pilot Site IoT Platforms:** This layer constitutes the first integration level. It is responsible for collecting, processing and managing data at the pilot sites level.
* **Central IoT Platform:** This is a Cloud-based top layer that integrates and aggregates data from the various pilot sites as well as external services (weather, transport, etc.). This is where the common AD services such as car sharing, platooning, etc., will reside. Data, at this level, are standardized using common formats, structures and semantics. The central IoT platform is hosted on IBM infrastructure.
The data analysis is performed according to Field Operational Test studies
(FOT 3 ) and using FESTA 4 methodology. The FESTA project funded by the
European Commission developed a handbook on FOT methodology which gives
general guidance on organizational issues, methodology and procedures, data
acquisition and storage, and evaluation.
From raw data a large amount of derived data is produced to address multiple
research needs. Derived data will follow a set of transformations: cleaning,
verification, conversion, aggregation, summarization or reduction.
In any case, data must be well documented and referenced using rich metadata
in order to facilitate and foster sharing, to enable validity assessments and
to enable its usage in an efficient way.
Thus, each data set must be described using additional information called
metadata. The latter must provide information about the data source, the data
transformations and the conditions in which the data has been produced. More
details about the metadata in AUTOPILOT are described in section 2.2.
## Data sets categories
The AUTOPILOT project will produce different categories of data sets.
* **Context data** : data that describe the context of an experiment (e.g. Metadata)
* **Acquired and derived data** : data that contain all the collected information from measurements and sensors related to an experiment.
* **Aggregated data** : data summary obtained by reduction of acquired data and generally used for data analysis.
### Context data
Context data is any information that helps to explain observation during a
study. Context data can be collected, generated or retrieved from existing
data. For example, it contains information such as vehicle, road or drivers
characteristics.
### Acquired and derived data
Acquired data is all data collected to be analysed during the course of the
study. Derived data is created by different types of transformations including
data fusion, filtering, classification, and reduction. Derived data are easy
to use and they contain derived measures and performance indicators referring
to a time period when specific conditions are met. This category includes
measures from sensors coming from vehicles or IoT and subjective data
collected from either the users or the environment.
The following list outlines the data types and sources that are collected:
<table>
<tr>
<th>
**In-vehicle measures** are the collected data from vehicles, either using
their original in-car sensors or sensors added for AUTOPILOT purposes. These
measures can be divided into different types:
</th> </tr>
<tr>
<td>
</td>
<td>
**Vehicle dynamics** are measurements that describe the mobility of the
vehicle. Measurements can be for example longitudinal speed, longitudinal and
lateral acceleration, yaw rate, and slip angle.
</td> </tr>
<tr>
<td>
</td>
<td>
**Driver actions** define the driver actions on the vehicle commands that can
be measured; for instance, steering wheel angle, pedal activation or HMI
button press variables, face monitoring indicators characterizing the state of
the driver, either physical or emotional.
</td> </tr>
<tr>
<td>
</td>
<td>
**In-vehicle systems state** can be accessed by connecting to the embedded
controllers. It includes continuous measures like engine RPM or categorical
values like ADAS and active safety systems activation.
</td> </tr>
<tr>
<td>
</td>
<td>
**Environment detection** is the environment data that can be obtained by
advanced sensors like RADARs, LIDARs, cameras and computer vision, or more
simple optical sensors. For instance, luminosity or presence of rain, but also
characteristics and dynamics of the infrastructure (lane width, road
curvature) and surrounding objects (type, relative distances and speeds) can
be measured from within a vehicle.
</td> </tr>
<tr>
<td>
</td>
<td>
**Vehicle positioning**: the geographical location of a vehicle is determined
with satellite navigation systems (e.g. GPS) and the aforementioned advanced
sensors.
</td> </tr>
<tr>
<td>
</td>
<td>
**Media** mostly consist of video. The data consist of media data but also
index files used to synchronize the other data categories. They are also often
collected from the road side.
</td> </tr>
<tr>
<td>
**Continuous subjective measures** Complementary to sensors and
instrumentation, some continuous measures can also be built in a more
subjective way, by analysts or annotators, notably using video data.
</td> </tr>
<tr>
<td>
**Road-side measures** are vehicle counting, speed measurement and
positioning, using radar, rangefinders, inductive loops or pressure hoses. In
ITS systems, they may also contain more complex information remotely
transferred from vehicles to road-side units.
</td> </tr>
<tr>
<td>
**Experimental conditions** are the external factors which may have an impact
on participants’ behaviour. They may be directly collected during the
experiment, or integrated from external sources. Typical examples are traffic
density and weather conditions.
</td> </tr>
<tr>
<td>
**IoT data** are the external sources of data collected/shared through IoT
services:
</td> </tr>
<tr>
<td>
</td>
<td>
**Users Data** can be generated by smartphones or wearables. The users can be
pedestrians or car drivers. These data help improve the user experience for
the usage of services by vehicle or infrastructure. The privacy aspects are
explained in chapter 4\.
</td> </tr>
<tr>
<td>
</td>
<td>
**Infrastructure Data** are all the data giving additional information about
the environment. Typical examples are the traffic status, road-works,
accidents and road conditions. They can also be directly collected from Road-
side cameras or traffic light control units and then transferred to IOT
Platforms. For instance, the infrastructure data can transfer hazard warnings
or expected occupancy of busses on bus lanes to vehicles using communication
networks.
</td> </tr>
<tr>
<td>
</td>
<td>
**In-Vehicle data** defines the connected devices or sensors in vehicles.
Typical examples are navigation status, time distance computations, real-time
pickup / drop-off information for customers, and events detected by car to be
communicated to other vehicles or GPS data to be transferred to maps.
</td> </tr>
<tr>
<td>
**Surveys data** are data resulting from the answers of surveys and
questionnaires for user acceptance evaluation.
</td> </tr> </table>
### Aggregated data
Aggregated data is generally created in order to answer the initial research
question. It is supposed to be verified and cleaned, thus facilitating its
usage for analysis purposes.
Aggregated data contains a specific part of the acquired or derived data (e.g.
the average speed during a trip or the number of passes through a specific
intersection). Its smaller size allows a simple storage in e.g. database
tables and an easy usage suitable for data analysis. To obtain aggregated
data, several data reduction processes are performed. The reduction process
summarizes the most important aspects in the data into a list of relevant
parameters or events, through one or all of the following processes:
validation, curation, conversion, annotation.
Besides helping to answer new research questions, aggregated data may be
re-used with different statistical algorithms without the need to use raw
data. For AUTOPILOT, aggregated data will represent the most important data
types shared by the project, as described in section 4, Participation in the
open research data pilot. It does not allow potentially problematic re-uses
because it does not contain instantaneous values that would highlight illegal
behaviour of a vehicle, a driver or another subsystem.
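As an illustration of such a reduction, the following minimal Python sketch
(with hypothetical field names; the actual log schema is defined in D3.1)
derives one aggregated record per trip from raw speed samples:

```python
from statistics import mean

# Hypothetical raw log rows: one dict per sample, as collected at a pilot site.
raw_rows = [
    {"trip_id": "T001", "log_timestamp": 1546300800000, "speed": 12.4},
    {"trip_id": "T001", "log_timestamp": 1546300801000, "speed": 13.1},
    {"trip_id": "T002", "log_timestamp": 1546304400000, "speed": 8.7},
]

def aggregate_by_trip(rows):
    """Reduce raw samples to one summary record per trip (average speed)."""
    trips = {}
    for row in rows:
        trips.setdefault(row["trip_id"], []).append(row["speed"])
    return [
        {"trip_id": trip_id, "avg_speed": round(mean(speeds), 2),
         "n_samples": len(speeds)}
        for trip_id, speeds in trips.items()
    ]

print(aggregate_by_trip(raw_rows))
# [{'trip_id': 'T001', 'avg_speed': 12.75, 'n_samples': 2}, ...]
```

The aggregated records no longer contain the instantaneous values of the raw
samples, which is precisely what makes them suitable for open sharing.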
## Metadata
### General principles
This section reviews the relevant metadata standards developed or used in the
previous and ongoing field operational tests (FOT) and naturalistic driving
studies (NDS) as a basis for the development of the metadata specifications of
the pilot data. Such standards will help the analysis and re-use of the
collected data within the AUTOPILOT project and beyond.
The text in this section is derived from the work done in the FOT-Net Data
project 4 for sharing data from field operational tests. The results of this
work are described in the Data Sharing Framework 6 . The CARTRE project 7
is currently updating this document to specifically address road automation
pilots and FOTs.
As described in the previous sections, the pilots will generate and collect a
large amount of raw and processed data from continuous data-logging, event-
based data collection, and surveys. The collected data is analysed and used
for various purposes in the project including the impact assessment carried
out by partners who are not involved in the pilots. This is a typical issue
encountered in many FOT/NDS projects in which the data analyst (or re-user)
needs to know how the raw data was collected and processed in order to perform
data analysis, modelling and interpretation. Therefore, good metadata is
vital.
The Data Sharing Framework defines metadata as ‘ **any information that is
necessary in order to use or properly interpret data** ’. The aim of this
section is to address these issues and to provide methods to efficiently
describe a dataset and its associated metadata. It results in suggestions for
good practices in documenting a data collection and datasets in a structured
way. Following the definition of metadata by the data sharing framework, we
divide the AUTOPILOT’s Metadata into four different categories as follows.
* **AUTOPILOT pilot design and execution** documentation, which corresponds to a high level description of a data collection: its initial objectives and how they were met, description of the test site, etc.
* **Descriptive** metadata, which describes precisely each component of the dataset, including information about its origin and quality.
* **Structural** metadata, which describes how the data is being organized.
* **Administrative** metadata, which sets the conditions for how the data can be accessed and how this is being implemented.
Field Operational Tests (FOTs) have been carried out worldwide and have
adopted different metadata formats to manage the collected data. A good
example is the ITS Public Data Hub hosted by the US Department of Transport 5 .
There are over 100 data sets created using ITS technologies. The data sets
contain various types of information, such as highway detector data, travel
times, traffic signal timing data, incident data, weather data and connected
vehicle data; many of these will also be collected in the AUTOPILOT data. The
ITS Public Data Hub uses the ASTM 2468-05 standard format for metadata to
support archived data management systems. This standard would be a good
starting point for designing metadata formats for the various types of
operational data collected by the IoT devices and connected vehicles in
AUTOPILOT.
In a broader context of metadata standardisation, there are a large number of
metadata standards available which address the needs of particular user
communities. The Digital Curation Centre (DCC) provides a comprehensive list
of metadata standards 9 for various disciplines such as general research
data, physical science as well as social science & humanities. It also lists
software tools that have been developed to capture or store metadata
conforming to a specific standard.
### IOT metadata
The metadata describing IoT data are specified in the context of OneM2M
standard 10 . In such context “Data” signifies digital representations of
anything. In practice, that digital representation is associated to a
“container” resource having specific attributes. Those attributes are both
metadata describing the digital object itself, and the values of the variables
of that object, which are called “content”.
Every time an IoT device publishes new data on the OneM2M platform, a new
“content instance” is generated, representing the actual status of that
device. All the “content instances” are stored in the internal database with
a unique resource ID.
The IoT metadata describe the structure of the information, according to the
OneM2M standard. The IoT metadata are described in Table 1 below.
#### Table 1 – OneM2M Metadata for IoT data 6
<table>
<tr>
<th>
**Metadata Element**
</th>
<th>
**Extended name**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
pi
</td>
<td>
parentID
</td>
<td>
ResourceID of the parent of this resource.
</td> </tr>
<tr>
<td>
ty
</td>
<td>
resourceType
</td>
<td>
The Resource Type attribute identifies the type of the resource, e.g.
“4 (contentInstance)”.
</td> </tr>
<tr>
<td>
ct
</td>
<td>
creationTime
</td>
<td>
Time/date of creation of the resource.
This attribute is mandatory for all resources and the value is assigned by the
system at the time when the resource is locally created. Such an attribute
cannot be changed.
</td> </tr>
<tr>
<td>
ri
</td>
<td>
resourceID
</td>
<td>
This attribute is an identifier for the resource that is used for 'non-
hierarchical addressing method', i.e. this attribute contains the
'Unstructured-CSErelative-Resource-ID' format of a resource ID as defined in
table 7.2-1 of [5].
This attribute is provided by the Hosting CSE when it accepts a resource
creation procedure. The Hosting CSE assigns a resourceID which is unique in
that CSE.
</td> </tr>
<tr>
<td>
rn
</td>
<td>
resourceName
</td>
<td>
This attribute is the name for the resource that is used for 'hierarchical
addressing method' to represent the parent-child relationships of resources.
See clause 7.2 in [5] for more details.
</td> </tr>
<tr>
<td>
lt
</td>
<td>
lastModifiedTime
</td>
<td>
Last modification time/date of the resource. The lastModifiedTime value is
updated when the resource is updated.
</td> </tr>
<tr>
<td>
et
</td>
<td>
expirationTime
</td>
<td>
Time/date after which the resource is deleted by the Hosting CSE.
</td> </tr>
<tr>
<td>
acpi
</td>
<td>
accessControlPolicyIDs
</td>
<td>
The attribute contains a list of identifiers of an <accessControlPolicy>
resource. The privileges defined in the <accessControlPolicy> resource that
are referenced determine who is allowed to access the resource containing this
attribute for a specific purpose (e.g. Retrieve, Update, Delete, etc.).
</td> </tr>
<tr>
<td>
lbl
</td>
<td>
label
</td>
<td>
Tokens used to add meta-information to resources. This attribute is optional.
The value of the labels attribute is a list of individual labels that can be
used, for example, for discovery purposes when looking for particular
resources that one can "tag" using that label-key.
</td> </tr>
<tr>
<td>
st
</td>
<td>
stateTag
</td>
<td>
An incremental counter of modification on the resource. When a resource is
created, this counter is set to 0, and it is incremented on every modification
of the resource
</td> </tr>
<tr>
<td>
cs
</td>
<td>
contentSize
</td>
<td>
Size in bytes of the content attribute.
</td> </tr>
<tr>
<td>
cr
</td>
<td>
creator
</td>
<td>
The ID of the entity (Application Entity or Common Services Entity) which
created the resource containing this attribute
</td> </tr>
<tr>
<td>
cnf
</td>
<td>
contentinfo
</td>
<td>
Information on the content that is needed to understand the content. This
attribute is a composite attribute. It is composed first of an Internet Media
Type (as defined in the IETF RFC 6838) describing the type of the data, and
second of an encoding information that specifies how to first decode the
received content. Both elements of information are separated by a separator
defined in OneM2M TS-0004 [3].
</td> </tr>
<tr>
<td>
or
</td>
<td>
ontologyRef
</td>
<td>
This attribute is optional.
A reference (URI) of the ontology used to represent the information that is
stored in the contentInstances resources of the <container> resource. If this
attribute is not present, the contentInstance resource inherits the
ontologyRef from the parent <container> resource if present.
</td> </tr> </table>
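As an illustration of these attributes, a single contentInstance carrying one
vehicle speed reading could be represented as the following Python
dictionary. The values are invented for the example; only the short attribute
names and their meaning come from Table 1:

```python
# Sketch of a OneM2M <contentInstance> resource using the short attribute
# names from Table 1. All values are illustrative, not taken from a pilot site.
content_instance = {
    "pi": "cnt-vehicle42-speed",   # parentID: the parent <container> resource
    "ty": 4,                       # resourceType: 4 = contentInstance
    "ri": "cin-000123",            # resourceID, unique within the hosting CSE
    "rn": "cin_000123",            # resourceName (hierarchical addressing)
    "ct": "20180612T103015",       # creationTime, assigned by the hosting CSE
    "lt": "20180612T103015",       # lastModifiedTime
    "et": "20190612T103015",       # expirationTime
    "st": 0,                       # stateTag: modification counter
    "cs": 22,                      # contentSize in bytes
    "cnf": "application/json:0",   # contentInfo: media type + encoding
    "con": '{"speed": 12.4}',      # the "content": the actual device values
}
```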
# Data management methodology in AUTOPILOT
The AUTOPILOT data collection process and data management is built upon
requirements coming from 4 processes:
* **The evaluation requirement** defines the minimum data that must be collected in order to perform the evaluation process at the end of the project
* **The test specification** provides details about the data to be collected on the basis of the evaluation requirements and according to use cases specifications
* **The test data management** defines the data collection, harmonization, storage and sharing requirements using the first two processes and the ORDP process
* **The Open Research Data Pilot** 7 **(ORDP)** defines the requirement related to data sharing of research data
## Evaluation process requirements
AUTOPILOT applies an evaluation methodology based on FESTA ( _Field
opErational teSt supporT Action_ ). This methodology consists of collecting a
relevant set of data to carry out the assessment of hypotheses relating to
the benefit or improvement provided by IT solutions for the driving of
vehicles; in the context of AUTOPILOT, Automated Driving.
Figure 3 below shows a high-level view of the data that is collected and
integrated in the evaluation process. Different types of data (in blue) are
collected, stored and analysed by different processes.
To fulfil the project objectives, a design of experiment is performed during
the evaluation task. This design creates requirements that define the number
of scenarios and test cases, the duration of tests and test runs, the number
of situations per specific event, the number of test vehicles, the variation
in users, the variation in situations (weather, traffic, etc.). Each pilot
site must comply with this design of experiment and provide sufficient and
meaningful data with the required quality level to enable technical
evaluation.
## Tests specification process requirements
The pilot tests specification Task T3.1 plays a major role and must be
thoroughly followed. Indeed, this task will convert the high-level
requirements defined in the evaluation process into specific and detailed
specifications of data formats, data size, data currencies, data units, data
files, and storage. The list of requirements is defined for each of the
following items: pilot sites, scenarios, test cases, measures, parameters,
data quality, etc., and is described in deliverable D3.1. All the development
tasks of WP2 must, where impacted, fully implement the requirements defined
in D3.1 in order to provide all the data (test data) as expected by the
technical evaluation.
## Open research data pilot requirement process
Additional requirements related to the ORDP (Open Research Data Pilot) are
defined in this document to guarantee that the collected data is provided in
compliance with the European Commission Guidelines 8 on Data Management in
Horizon 2020. Those requirements are defined and explained in depth in
chapter 4 .
## Test data management methodology
The main objective of the data management plan is to define the methodology to
be applied in AUTOPILOT across all pilot sites, in particular test data
management. This includes the explanation of the common data collection and
integration methodology.
One of the main objectives within T3.4 “Test Data management” is to ensure
the comparability and consistency of collected data across pilot sites. In
this context, the methodology is highly impacted by the pilot site
specifications of Task 3.1 and compliant with the evaluation methodologies
developed in task 4.1. In particular, technical evaluation primarily needs
log data from the vehicles, IoT platforms, cloud services and situational
data from pilot sites to detect situations and events, and to calculate
indicators.
The log data parameters that are needed for technical evaluation are
organized by data sources (vehicle sources, vehicle data, derived data,
positioning, V2X messages, IoT messages, events, situations, surveys and
questionnaires).
For IoT data, some pilot sites use proprietary IoT platforms in order to
collect specific IoT data produced by specific devices or vehicles (e.g. the
Brainport car sharing service and automated valet parking service use the
Watson IoT Platform™ to collect data from their vehicles).
On top of that, there is a OneM2M interoperability platform at each pilot
site. This is the interoperability IoT platform for exchanging IoT messages
relevant to all autonomous driving (AD) vehicles at pilot site level. The
test data is then stored in the pilot site test server storage, which
contains mainly the vehicle data, IoT data and survey data. Further, the test
data is packaged and sent to the AUTOPILOT central storage, which allows
evaluators to access all the pilot site data in a common format. This
includes the input from all pilot sites and use cases and for all test
scenarios and test runs.
Every pilot site has its own test storage server for data collection
(distributed data management), named PSTS (Pilot Site Test Server). In
addition, there is a central storage server where data from all pilot sites is
stored for evaluation and analysis, named CTS (Centralized Test Server).
Please note that the CTS and the PSTS are resources of the project, not
available for public use.
The following figure represents the data management methodology and
architecture used in AUTOPILOT across all pilot sites.
**Figure 4 – Generic scheme of data architecture in AUTOPILOT**
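To make this flow concrete, the following minimal Python sketch illustrates
how a pilot site might package the test data of one run for transfer from a
PSTS to the CTS. The directory layout, file naming and the
`package_pilot_site_data` helper are illustrative assumptions, not project
code; the actual packaging format is agreed between the pilot sites and WP3.

```python
import json
import zipfile
from pathlib import Path

def package_pilot_site_data(pst_dir: str, pilot_site: str, test_run: str) -> Path:
    """Bundle the CSV log files of one test run, held on a Pilot Site Test
    Server (PSTS), into a single archive for upload to the Centralized
    Test Server (CTS)."""
    archive = Path(f"{pilot_site}_{test_run}.zip")
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for data_file in sorted(Path(pst_dir).glob("**/*.csv")):
            zf.write(data_file, arcname=data_file.name)
        # Context data travels with the archive so evaluators can interpret it.
        zf.writestr("context.json", json.dumps({
            "pilot_site": pilot_site,
            "test_run": test_run,
        }))
    return archive

# Example: package_pilot_site_data("psts/versailles/run01", "Versailles", "run01")
```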
# Participation in the open research data pilot
The AUTOPILOT project has agreed to participate in the Pilot on Open Research
Data in Horizon 2020 14 . The project uses the specific Horizon 2020
guidelines associated with ‘open’ access to ensure that the results of the
project provide the greatest impact possible.
AUTOPILOT will ensure open access 15 to all peer-reviewed scientific
publications relating to its results and will provide access to the research
data needed to validate the results presented in deposited scientific
publications.
The following lists the minimum fields of metadata that should come with an
AUTOPILOT project-generated scientific publication in a repository:
* The terms: “European Union (EU)”, “Horizon 2020”
* Name of the action (Research and Innovation Action)
* Acronym and grant number (AUTOPILOT, 731993)
* Publication date
* Length of embargo period if applicable
* Persistent identifier
When referencing open access data, AUTOPILOT will include at a minimum the
following statement demonstrating EU support (with the relevant information
included in the repository metadata):
\- “This project has received funding from the European Union’s Horizon 2020
research and innovation program under grant agreement No 731993”.
The AUTOPILOT consortium will strive to make many of the collected datasets
open access. When this is not the case, the data sharing section for that
particular dataset will describe why access has been restricted (see
chapter 5).
Regarding the specific repositories available to the AUTOPILOT consortium,
numerous project partners maintain institutional repositories, listed in the
following DMP version, where project scientific publications and, in some
instances, research data are deposited. The use of a specific repository will
depend primarily on the primary creator of the publication and on the data in
question.
Some other project partners do not operate publicly accessible institutional
repositories. When depositing scientific publications they shall use either a
domain-specific repository or the EU recommended service OpenAIRE
(http://www.openaire.eu) as an initial step to finding resources to determine
relevant repositories.
Project research data shall be deposited to the online data repository ZENODO
16 . It is a free service developed by CERN under the EU FP7 project
OpenAIREplus (grant agreement no. 283595).
The repository shall also include information regarding the software, tools
and instruments that were used by the dataset creator(s) so that secondary
data users can access and then validate the results.

14 http://ec.europa.eu/research/participants/docs/h2020-funding-guide/cross-cutting-issues/open-access-dissemination_en.htm

15 http://ec.europa.eu/research/participants/docs/h2020-funding-guide/cross-cutting-issues/open-access-data-management/open-access_en.htm

16 https://zenodo.org/
The AUTOPILOT data collection can be accessed in ZENODO repository at the
following address: _https://zenodo.org/communities/autopilot_
Note that publishing of data into Zenodo is done by data producers during the
project and after the end of the project.
In summary, as a baseline AUTOPILOT partners shall deposit:
* Scientific publications – on their respective institute repositories in addition (when relevant) to the AUTOPILOT Zenodo repository
* Research data – to the AUTOPILOT Zenodo collection (when possible)
* Other project output files – to the AUTOPILOT Zenodo collection (when relevant)
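As a sketch of this deposit step, the Zenodo REST API can be driven from a
short script. The token and file name below are placeholders, and the Zenodo
API documentation remains the authoritative reference for this workflow:

```python
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "<personal-access-token>"  # placeholder; never commit real tokens

# Create an empty deposition to attach files and metadata to.
resp = requests.post(ZENODO_API, params={"access_token": TOKEN}, json={})
resp.raise_for_status()
deposition = resp.json()

# Upload a dataset file to the new deposition (file name is illustrative).
with open("AUTOPILOT_Versailles_Platooning_IOT_01.csv", "rb") as fh:
    upload = requests.post(
        f"{ZENODO_API}/{deposition['id']}/files",
        params={"access_token": TOKEN},
        data={"name": "AUTOPILOT_Versailles_Platooning_IOT_01.csv"},
        files={"file": fh},
    )
    upload.raise_for_status()
```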
# AUTOPILOT dataset description
## General Description
This section provides an explanation of the different types of data sets
produced or collected in the AUTOPILOT project.
The descriptions of the different data sets, including their reference, file
format, standards, methodologies, metadata and the repository to be used, are
given below. These descriptions are collected using the pilot sites'
requirements and specifications.
It is important to note that the datasets below are produced by each use case
in all the pilot sites. The dataset categories are:
* IoT dataset
* Vehicle dataset
* V2X messages dataset
* Surveys dataset
All pilot sites have agreed on the logged parameters. Therefore, the
following tables describe all the parameters used across pilot sites: most
are shared, while a subset can be specific to a pilot site or a use case.
## Template used in dataset description
This table is a template used to describe the datasets.
### Table 2 – Dataset description template
<table>
<tr>
<th>
Dataset Reference
</th>
<th>
**AUTOPILOT_PS_UC_datatype_ID**
Each data set will have a reference that is generated by the combination of
the name of the project, the pilot site (PS) and the use case in which it is
generated.
**Example** : AUTOPILOT_Versailles_Platooning_IOT_01
</th> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Nature of the data set
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
Each data set will have a full data description explaining the data
provenance, origin and usefulness. Reference may be made to existing data that
could be reused.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
The metadata attributes list and standards. The used methodologies
</td> </tr>
<tr>
<td>
File format
</td>
<td>
All the formats that define the data
</td> </tr>
<tr>
<td>
Data Sharing
</td>
<td>
Explanation of the sharing policies related to the data set, between the
following options:
**Open** : Open for public disposal.
**Embargo** : It becomes public when the embargo period applied by the
publisher is over. In case it is categorized as embargo, the end date of the
embargo period must be written in DD/MM/YYYY format.
**Restricted** : Only for project internal use.
Each data set must have its distribution license.
Provide information about personal data and mention whether the data is
anonymized or not. Tell if the dataset entails personal data and how this
issue is taken into account.
</td> </tr>
<tr>
<td>
Archiving and Preservation
</td>
<td>
The preservation guarantee and the data storage during and after the project
**Example** : databases, institutional repositories, public repositories.
</td> </tr> </table>
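As a small illustration of this naming convention, the following hypothetical
Python helper builds a dataset reference from its components:

```python
def dataset_reference(pilot_site: str, use_case: str, datatype: str, seq: int) -> str:
    """Build a dataset reference following the AUTOPILOT_PS_UC_datatype_ID
    convention from the template above."""
    return f"AUTOPILOT_{pilot_site}_{use_case}_{datatype}_{seq:02d}"

assert dataset_reference("Versailles", "Platooning", "IOT", 1) == \
    "AUTOPILOT_Versailles_Platooning_IOT_01"
```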
## IoT dataset
**Table 3 – IoT dataset description**
<table>
<tr>
<th>
Dataset Reference
</th>
<th>
**AUTOPILOT_PS_UC_IOT_ID**
</th> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
IOT data generated from connected devices
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
This dataset refers to the IoT datasets generated from IoT devices within the
use cases. This includes the data coming from VRUs, RSUs, smartphones,
vehicles, drones, etc.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
During the project, the metadata related to the IoT data are based on the
OneM2M standard. The OneM2M IoT platforms are implemented across pilot sites
to provide the interoperability feature. More details are provided in section
2.2.2. In addition, the data model of these data is inspired by the DMAG
(data management activity group) work done in T2.3. The DMAG defined a
unified data model that standardizes all the IoT messages across pilot sites.
The AUTOPILOT common IoT data model is based on different standards:
SENSORIS, DATEX II. After the project, the metadata is enriched with ZENODO's
metadata, including the title, creator, date, contributor, pilot site, use
case, description, keywords, format, resource type, etc.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
JSON, CSV
</td> </tr>
<tr>
<td>
Data Sharing
</td>
<td>
This dataset is widely open to be used by 3rd party applications and is
deposited in the ZENODO repository.
</td> </tr>
<tr>
<td>
Archiving and Preservation
</td>
<td>
During the project, the data will first be stored in the IoT platform. Then,
the data is transferred to the pilot site test server before finishing up in
the centralized test server. At the end of the project, the data set is
archived and preserved in ZENODO repositories.
</td> </tr> </table>
### IoT parameters description
**Table 4 – IoT parameters description**
<table>
<tr>
<th>
**Name**
</th>
<th>
**Type**
</th>
<th>
**Range**
</th>
<th>
**Unit**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
rowid
</td>
<td>
serial
</td>
<td>
0..
</td>
<td>
[N/A]
</td>
<td>
sequence of row numbers to uniquely identify a log line by <log_stationid,
log_timestamp, rowid>, only
necessary when a subtable is logged
</td> </tr>
<tr>
<td>
log_timestamp
</td>
<td>
long
</td>
<td>
from 0 to
4398046511103
(= 2⁴²-1)
</td>
<td>
msec
</td>
<td>
timestamp at which the log_stationid logs (writes) the data row; elapsed
time since midnight January 1st 1970 UTC
</td> </tr>
<tr>
<td>
log_stationid
</td>
<td>
long
</td>
<td>
from 0 to 4294967295
(= 2³²-1)
</td>
<td>
[N/A]
</td>
<td>
unique id of the host (e.g. stationid, server id, IoT platform or device id,
cloud service id, ...) that logs this log data row. Log_stationid can be
another host than the source
generating the data to be logged
</td> </tr>
<tr>
<td>
log_action
</td>
<td>
enum
</td>
<td>
['SENT', 'RECEIVED']
</td>
<td>
[N/A]
</td>
<td>
Action that triggered the logging event. (Enum: 'SENT', 'RECEIVED')
</td> </tr>
<tr>
<td>
log_communicationprofile
</td>
<td>
enum
</td>
<td>
['ITS_G5',
'CELLULAR',
'UWB',
'LTE_V2X']
</td>
<td>
[N/A]
</td>
<td>
Communication profile, medium or path used to send or receive the message.
This needs to be logged in case messages are communicated via alternative
profiles or multiple channels are used to communicate similar messages.
Default is ITS_G5.
</td> </tr>
<tr>
<td>
log_messagetype
</td>
<td>
enum
</td>
<td>
['ETSI.CAM',
'ETSI.DENM',
'ISO.IVI',
'ETSI.MAPEM',
'ETSI.SPATEM']
‘IoT.IOT-
OBJECT’
</td>
<td>
[N/A]
</td>
<td>
Type of standardised message, used for automated processing in case multiple
message types are combined in a single log file. The enum fields refer to the
<standardisation organisation>.<message type>.
</td> </tr>
<tr>
<td>
data
</td>
<td>
string
</td>
<td>
</td>
<td>
[N/A]
</td>
<td>
JSON containing data extracted from IoT platform, according to Use Case.
</td> </tr> </table>
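To make the schema concrete, the following Python sketch validates one log
row against the ranges and enumerations of Table 4 and decodes the
epoch-millisecond timestamp. The field names follow the table; the validation
logic itself is only an illustration, not project code.

```python
from datetime import datetime, timezone

LOG_ACTIONS = {"SENT", "RECEIVED"}
COMM_PROFILES = {"ITS_G5", "CELLULAR", "UWB", "LTE_V2X"}

def validate_iot_row(row: dict) -> None:
    """Check one IoT log row against the ranges/enums of Table 4."""
    assert 0 <= row["log_timestamp"] <= 2**42 - 1   # msec since epoch
    assert 0 <= row["log_stationid"] <= 2**32 - 1   # unique host id
    assert row["log_action"] in LOG_ACTIONS
    assert row["log_communicationprofile"] in COMM_PROFILES

row = {
    "rowid": 0,
    "log_timestamp": 1546300800000,
    "log_stationid": 42,
    "log_action": "SENT",
    "log_communicationprofile": "ITS_G5",
    "log_messagetype": "ETSI.CAM",
    "data": '{"speed": 12.4}',
}
validate_iot_row(row)

# log_timestamp is elapsed milliseconds since midnight, 1 January 1970 UTC.
print(datetime.fromtimestamp(row["log_timestamp"] / 1000, tz=timezone.utc))
# 2019-01-01 00:00:00+00:00
```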
## Vehicles dataset
**Table 5 – Vehicles dataset description**
<table>
<tr>
<th>
Dataset Reference
</th>
<th>
**AUTOPILOT_PS_UC_VEHICLES_ID**
</th> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data generated from the vehicle sensors.
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
This dataset refer to the vehicle datasets that is generated from the vehicle
sensors within use cases. This includes the data coming from the CAN bus,
cameras, RADARs, LIDARs and GPS.
Following log types can be produced by vehicles :
* Vehicle
* Positioning system
* Vehicle dynamics
* Driver vehicle interaction
* Environment sensors
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
The vehicle data standards used in AUTOPILOT are developed in task 2.1. The
pilot site implementations are based on well-known standards to come up with
a common data format: CAN, ROS, etc. More details are provided in D2.1. After
the project, the metadata is based on ZENODO's metadata, including the title,
creator, date, contributor, pilot site, use case, description, keywords,
format, resource type, etc.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
XML, CSV, SQL, JSON
</td> </tr>
<tr>
<td>
Data Sharing
</td>
<td>
This dataset is widely open to be used by 3rd party applications and is
deposited in the ZENODO repository.
</td> </tr>
<tr>
<td>
Archiving and Preservation
</td>
<td>
During the project, the data will first be stored in pilot site test server
before finishing up in the centralized test server. At the end of the project,
the data set is archived and preserved in ZENODO repositories.
</td> </tr> </table>
### Vehicles parameters description
All pilot sites have agreed on the logged parameters. Therefore, the
following tables describe all the parameters used across pilot sites: most
are common; a subset can be specific to a pilot site or a use case.
The first table describes the common metadata that is logged with every
parameter.
#### Table 6 – Vehicle common metadata description
<table>
<tr>
<th>
**Name**
</th>
<th>
**Type**
</th>
<th>
**Range**
</th>
<th>
**Unit**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
rowid
</td>
<td>
serial
</td>
<td>
0..
</td>
<td>
[N/A]
</td>
<td>
sequence of row numbers to uniquely identify a log line by <log_stationid,
log_timestamp, rowid>, only necessary when a subtable is logged
</td> </tr>
<tr>
<td>
log_timestamp
</td>
<td>
long
</td>
<td>
from 0 to 4398046511103
(= 2⁴²-1)
</td>
<td>
msec
</td>
<td>
timestamp at which the log_stationid logs (writes) the data row; elapsed time
since midnight January 1st 1970 UTC
</td> </tr>
<tr>
<td>
log_stationid
</td>
<td>
long
</td>
<td>
from 0 to 4294967295
(= 2³²-1)
</td>
<td>
[N/A]
</td>
<td>
unique id of the host (e.g. stationid, server id, IoT platform or device id,
cloud service id, ...) that logs this log data row. Log_stationid can be
another host than the source generating the data to be logged
</td> </tr>
<tr>
<td>
log_applicationid
</td>
<td>
long
</td>
<td>
from 0 to 4294967295
(= 2³²-1)
</td>
<td>
[N/A]
</td>
<td>
unique id of the application, instance or thread, on the log_stationid host
that logs this log data row. Applicationid is at least unique within the
log_station. ApplicationId is mandatory if multiple components on a host log
to the same table or if the application logging into a table is not trivial
(e.g. it is trivial that a CAM Basic Service is the only application logging
CAM messages in the cam table). For vehicle data, the log_applicationid is
also used to identify specific physical and virtual sensors, such as
front_camera, radar, lidar, GPS, CAN
</td> </tr> </table>
#### Table 7 – Vehicle parameters description
<table>
<tr>
<th>
**Name**
</th>
<th>
**Type**
</th>
<th>
**Range**
</th>
<th>
**Unit**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
speed
</td>
<td>
double
</td>
<td>
from 0 to 163.82
</td>
<td>
[m/s]
</td>
<td>
Speed over ground, meters per second.
</td> </tr>
<tr>
<td>
outsidetemperature
</td>
<td>
double
</td>
<td>
from -60 to 67
</td>
<td>
[°C]
</td>
<td>
Vehicle outside temperature during trip.
</td> </tr>
<tr>
<td>
insidetemperature
</td>
<td>
double
</td>
<td>
from -60 to 67
</td>
<td>
[°C]
</td>
<td>
Vehicle inside temperature during trip.
</td> </tr>
<tr>
<td>
batterysoc
</td>
<td>
double
</td>
<td>
from 0 to 100
</td>
<td>
[%]
</td>
<td>
Percentage of the battery of the vehicle.
</td> </tr>
<tr>
<td>
rangeestimated
</td>
<td>
double
</td>
<td>
from 0 to 1000
</td>
<td>
[km]
</td>
<td>
Range estimated with the actual percentage of the battery and/or available
fuel.
</td> </tr>
<tr>
<td>
fuelconsumption
</td>
<td>
double
</td>
<td>
from 0 to 1
</td>
<td>
[L/km]
</td>
<td>
Average fuel consumption during a route or trip.
</td> </tr>
<tr>
<td>
enginespeed
</td>
<td>
int
</td>
<td>
from 0 to 10000
</td>
<td>
[1/min]
</td>
<td>
Engine speed calculated in terms of revolutions per minute.
</td> </tr>
<tr>
<td>
owndistance
</td>
<td>
double
</td>
<td>
from 0 to 5000
</td>
<td>
[km]
</td>
<td>
Total kilometrage per day or trip or road type etc.
</td> </tr> </table>
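To illustrate how these parameters travel together with the common metadata
of Table 6, the sketch below writes one vehicle log line to a CSV file. The
column selection is illustrative only; the authoritative parameter list is
the one agreed in D3.1.

```python
import csv

# Common metadata (Table 6) followed by a few vehicle parameters (Table 7).
FIELDS = ["rowid", "log_timestamp", "log_stationid", "log_applicationid",
          "speed", "outsidetemperature", "batterysoc"]

row = {
    "rowid": 0,
    "log_timestamp": 1546300800000,   # msec since 1 January 1970 UTC
    "log_stationid": 42,
    "log_applicationid": 7,           # identifies the logging sensor/app
    "speed": 12.4,                    # [m/s]
    "outsidetemperature": 21.5,       # [°C]
    "batterysoc": 87.0,               # [%]
}

with open("vehicle_log.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerow(row)
```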
**Table 8 – Positioning system parameters description**
<table>
<tr>
<th>
**Name**
</th>
<th>
**Type**
</th>
<th>
**Range**
</th>
<th>
**Unit**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
speed
</td>
<td>
double
</td>
<td>
from 0 to 163.82
</td>
<td>
[m/s]
</td>
<td>
Speed over ground, meters per second. Measured by GNSS receiver.
</td> </tr>
<tr>
<td>
longitude
</td>
<td>
double
</td>
<td>
from -180 to 180
</td>
<td>
[degree]
</td>
<td>
Longitude
</td> </tr>
<tr>
<td>
latitude
</td>
<td>
double
</td>
<td>
from -90 to 90
</td>
<td>
[degree]
</td>
<td>
Latitude
</td> </tr>
<tr>
<td>
heading
</td>
<td>
double
</td>
<td>
from 0 to 360
</td>
<td>
[degree]
</td>
<td>
Heading
</td> </tr>
<tr>
<td>
ggasentence
</td>
<td>
string
</td>
<td>
</td>
<td>
[NMEA format]
</td>
<td>
GGA - Fix information.
</td> </tr>
<tr>
<td>
gsasentence
</td>
<td>
string
</td>
<td>
</td>
<td>
[NMEA format]
</td>
<td>
GSA - Overall satellite data.
</td> </tr>
<tr>
<td>
rmcsentence
</td>
<td>
string
</td>
<td>
</td>
<td>
[NMEA format]
</td>
<td>
RMC - Recommended minimum data for GPS.
</td> </tr>
<tr>
<td>
vtgsentence
</td>
<td>
string
</td>
<td>
</td>
<td>
[NMEA format]
</td>
<td>
VTG - Vector track and speed over the ground.
</td> </tr>
<tr>
<td>
zdasentence
</td>
<td>
string
</td>
<td>
</td>
<td>
[NMEA format]
</td>
<td>
ZDA - Date and time.
</td> </tr> </table>
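The NMEA sentences above are plain comma-separated strings. As an
illustration (a hand-rolled sketch without checksum handling, not a full NMEA
parser), the fix information of a GGA sentence can be decoded as follows:

```python
def parse_gga(sentence: str) -> dict:
    """Decode the main fields of an NMEA GGA (fix information) sentence.
    Minimal sketch: no checksum verification, assumes a valid fix."""
    fields = sentence.split(",")

    def to_degrees(value: str, hemisphere: str) -> float:
        # NMEA encodes positions as (d)ddmm.mmmm; convert to decimal degrees.
        dot = value.index(".")
        degrees = float(value[:dot - 2])
        minutes = float(value[dot - 2:])
        decimal = degrees + minutes / 60.0
        return -decimal if hemisphere in ("S", "W") else decimal

    return {
        "time_utc": fields[1],
        "latitude": to_degrees(fields[2], fields[3]),
        "longitude": to_degrees(fields[4], fields[5]),
        "n_satellites": int(fields[7]),
    }

print(parse_gga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"))
# {'time_utc': '123519', 'latitude': 48.1173, 'longitude': 11.5166..., 'n_satellites': 8}
```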
**Table 9 – Vehicle dynamics description**
<table>
<tr>
<th>
**Name**
</th>
<th>
**Type**
</th>
<th>
**Range**
</th>
<th>
**Unit**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
yawrate
</td>
<td>
double
</td>
<td>
from -327.66 to 327.66
</td>
<td>
[°/s]
</td>
<td>
Vehicle rotation around the centre of mass of the empty vehicle. The leading
sign denotes the direction of rotation. The value is negative if the motion
is clockwise when viewing from the top.
</td> </tr>
<tr>
<td>
acclateral
</td>
<td>
double
</td>
<td>
from -16 to 16
</td>
<td>
[m/s²]
</td>
<td>
Lateral acceleration of the vehicle.
</td> </tr>
<tr>
<td>
acclongitudinal
</td>
<td>
double
</td>
<td>
from -16 to 16
</td>
<td>
[m/s²]
</td>
<td>
Longitudinal acceleration of the vehicle.
</td> </tr>
<tr>
<td>
accvertical
</td>
<td>
double
</td>
<td>
from -16 to 16
</td>
<td>
[m/s²]
</td>
<td>
Vertical acceleration of the vehicle.
</td> </tr>
<tr>
<td>
speedwheelunitdistance
</td>
<td>
double
</td>
<td>
from 0 to 163.82
</td>
<td>
[m/s]
</td>
<td>
Sensor on free running wheel for increased accuracy. Speed measured from
wheels (???).
</td> </tr>
<tr>
<td>
lanechange
</td>
<td>
enum
</td>
<td>
['NO', 'YES']
</td>
<td>
[N/A]
</td>
<td>
Lane change detection.
</td> </tr>
<tr>
<td>
speedlimit
</td>
<td>
int
</td>
<td>
from 0 to 150
</td>
<td>
[km/h]
</td>
<td>
Maximum legal speed limit (log_applicationid identifies the source: updated
in real time, from map information, from traffic sign).
</td> </tr> </table>
**Table 10 – Driver vehicle interaction**
<table>
<tr>
<th>
**Name**
</th>
<th>
**Type**
</th>
<th>
**Range**
</th>
<th>
**Unit**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
throttlestatus
</td>
<td>
int
</td>
<td>
from 0 to 100
</td>
<td>
[%]
</td>
<td>
Position of the throttle pedal (% pushed). Modify to boolean (i.e., 0 -> NOT
PUSHED, 1 -> PUSHED) if % is not available on the car.
</td> </tr>
<tr>
<td>
clutchstatus
</td>
<td>
int
</td>
<td>
from 0 to 100
</td>
<td>
[%]
</td>
<td>
Position of the clutch pedal (% pushed). Modify to boolean (i.e., 0 -> NOT
PUSHED, 1 -> PUSHED) if % is not available on the car.
</td> </tr>
<tr>
<td>
brakestatus
</td>
<td>
int
</td>
<td>
from 0 to 100
</td>
<td>
[%]
</td>
<td>
Position of the brake pedal (% pushed). Modify to boolean (i.e., 0 -> NOT
PUSHED, 1 -> PUSHED) if % is not available on the car.
</td> </tr>
<tr>
<td>
brakeforce
</td>
<td>
double
</td>
<td>
from 0 to 300
</td>
<td>
[bar]
</td>
<td>
Measure of master cylinder pressure.
</td> </tr>
<tr>
<td>
wipersstatus
</td>
<td>
enum
</td>
<td>
['OFF', 'ON']
</td>
<td>
[N/A]
</td>
<td>
Position of the windscreen wipers (boolean). Extend the enumeration if more
details are available (e.g., ['OFF', 'SLOW', 'FAST'], ['OFF', 'SLOW1',
'SLOW2', 'FAST1', 'FAST2']).
</td> </tr>
<tr>
<td>
steeringwheel
</td>
<td>
double
</td>
<td>
from -720 to 720
</td>
<td>
[°]
</td>
<td>
Position of the steering wheel.
</td> </tr>
<tr>
<td>
adaptivecruisecontrolstate
</td>
<td>
enum
</td>
<td>
['OFF', 'ON']
</td>
<td>
[N/A]
</td>
<td>
ACC activated (ON) or not (OFF).
</td> </tr>
<tr>
<td>
adaptivecruisecontrolsetspeed
</td>
<td>
double
</td>
<td>
>0.0
</td>
<td>
[m/s]
</td>
<td>
Speed target setting of ACC.
</td> </tr> </table>
**Table 11 – Environment sensors absolute**
<table>
<tr>
<th>
**Name**
</th>
<th>
**Type**
</th>
<th>
**Range**
</th>
<th>
**Unit**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
longitude
</td>
<td>
doub
le
</td>
<td>
from -90 to 90
</td>
<td>
[degree]
</td>
<td>
Main object transformed to geolocalized coordinates longitudinal
(log_applicationid identifies the sensor providing this measurement (e.g.,
camea, lidar, radar,...)).
</td> </tr>
<tr>
<td>
latitude
</td>
<td>
doub
le
</td>
<td>
from -180 to 180
</td>
<td>
[degree]
</td>
<td>
Main object transformed to geolocalized coordinates lateral position
(log_applicationid identifies the sensor providing this measurement (e.g.,
camea, lidar, radar,...)).
</td> </tr>
<tr>
<td>
obstacle_ID
</td>
<td>
int
</td>
<td>
from 0 to 1000
</td>
<td>
[-]
</td>
<td>
ID of the obstacle detected by
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
environmental sensors
</th> </tr>
<tr>
<td>
obstacle_covaria nce
</td>
<td>
float 64
</td>
<td>
</td>
<td>
</td>
<td>
Covariance matrix of positions of lon, lat, altitude of RADAR detected objects
</td> </tr>
<tr>
<td>
ObjectClass
</td>
<td>
int
</td>
<td>
from 0 to 65
</td>
<td>
[-]
</td>
<td>
65 classes from Mapillary dataset --> see
http://research.mapillary.com/publication/ iccv17a/
</td> </tr>
<tr>
<td>
lanewidthsensor based
</td>
<td>
doub
le
</td>
<td>
from 0 to 10
</td>
<td>
[m]
</td>
<td>
Lane width measured by on-board sensor(s).
</td> </tr>
<tr>
<td>
lanewidthmapba sed
</td>
<td>
doub
le
</td>
<td>
from 0 to 10
</td>
<td>
[m]
</td>
<td>
Lane width from map information.
</td> </tr>
<tr>
<td>
trafficsigndescrip tion
</td>
<td>
strin g
</td>
<td>
</td>
<td>
[N/A]
</td>
<td>
signrecognition -- as defined in IVI - ISO TS 19321 (2015) v1
</td> </tr>
<tr>
<td>
speedlimit_sign
</td>
<td>
doub
le
</td>
<td>
from 0 to 250
</td>
<td>
[km/h]
</td>
<td>
signrecognition -- as defined in IVI - ISO TS 19321 (2015) v1
</td> </tr>
<tr>
<td>
servicecategory
</td>
<td>
enu m
</td>
<td>
['dangerWarni
ng',
'regulatory',
'informative', 'publicFacilitie
s',
'ambientCondi
tion',
'roadCondition
' ]
</td>
<td>
[N/A]
</td>
<td>
signrecognition -- as defined in IVI - ISO TS 19321 (2015) v1
</td> </tr>
<tr>
<td>
servicecategoryc ode
</td>
<td>
int
</td>
<td>
[11, 12, 13, 21,
31, 32 ]
</td>
<td>
[N/A]
</td>
<td>
signrecognition -- as defined in IVI - ISO TS 19321 (2015) v1
</td> </tr>
<tr>
<td>
countrycode
</td>
<td>
strin g
</td>
<td>
</td>
<td>
[N/A]
</td>
<td>
signrecognition -- as defined in IVI - ISO TS 19321 (2015) v1 (ISO 3166-1
alpha-2)
</td> </tr>
<tr>
<td>
pictogramcategor ycode
</td>
<td>
int
</td>
<td>
from 0 to 999
</td>
<td>
[N/A]
</td>
<td>
signrecognition -- as defined in IVI - ISO TS 19321 (2015) v1
</td> </tr>
<tr>
<td>
VRU_pedestrian_
class
</td>
<td>
int
</td>
<td>
from 0 - 3
</td>
<td>
1 = children, 2 = adults, 3 = elderly
</td>
<td>
sub classes of pedestrians
</td> </tr>
<tr>
<td>
VRU_cyclist_class
</td>
<td>
int
</td>
<td>
from 0 - 3
</td>
<td>
1 = children, 2 = adults, 3 = elderly
</td>
<td>
sub classes of cyclists/riders
</td> </tr>
<tr>
<td>
confidence_level
s
</td>
<td>
doub
le
</td>
<td>
from 0 - 100
</td>
<td>
[%]
</td>
<td>
Indication for false positive detections (minimum default level)
</td> </tr>
<tr>
<td>
Environ_info
</td>
<td>
int
</td>
<td>
from 1 - 6
</td>
<td>
[-]
</td>
<td>
1=sunny/day, 2=raining/day, 3=snow/day,
4=night/dry, 5=raining/night, 6=snow/night
</td> </tr>
<tr>
<td>
Road_hazard
</td>
<td>
int
</td>
<td>
from 0 to 42
</td>
<td>
[N/A]
</td>
<td>
No standardized dataset available --> current proposal: pothole detection,
slippery road, black ice etc.
</td> </tr>
<tr>
<td>
sensor_position
</td>
<td>
int
</td>
<td>
from 0 to 1000
</td>
<td>
[mm]
</td>
<td>
Position of the sensor on the vehicle w.r.t. the centre of gravity;
required for correlating environmental detections with IoT detections.
</td> </tr>
<tr>
<td>
process_delay
</td>
<td>
int
</td>
<td>
from 0 to 1000
</td>
<td>
[ms]
</td>
<td>
Indicates whether the processing delay is known or unknown.
</td> </tr> </table>
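The table above and Table 12 below describe the same detections in two frames: geolocated latitude/longitude in degrees versus x/y offsets in metres from the host vehicle. The sketch below illustrates one plausible way to relate the two. It is illustrative only, assuming a heading measured clockwise from north, an ISO 8855-style vehicle frame (x forward, y to the left) and an equirectangular flat-earth approximation, which is adequate for the short sensor ranges listed here; the function and variable names are not part of the AUTOPILOT format.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def absolute_to_relative(host_lat, host_lon, host_heading_deg, obj_lat, obj_lon):
    """Convert an absolute detection (degrees) into host-relative
    x (longitudinal, forward) / y (lateral, left) offsets in metres.

    Equirectangular approximation: fine for the ranges in these tables
    (x up to 500 m, y within +/-50 m), not for long distances.
    """
    d_north = math.radians(obj_lat - host_lat) * EARTH_RADIUS_M
    d_east = (math.radians(obj_lon - host_lon)
              * EARTH_RADIUS_M * math.cos(math.radians(host_lat)))
    hdg = math.radians(host_heading_deg)  # 0 deg = north, clockwise positive
    x = d_north * math.cos(hdg) + d_east * math.sin(hdg)   # forward offset
    y = d_north * math.sin(hdg) - d_east * math.cos(hdg)   # left offset
    return x, y
```

For example, a detection 100 m due north of a host vehicle heading north would map to x = 100, y = 0.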
**Table 12 – Environment sensors relative**
<table>
<tr>
<th>
**Name**
</th>
<th>
**Type**
</th>
<th>
**Range**
</th>
<th>
**Unit**
</th>
<th>
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
x
</td>
<td>
double
</td>
<td>
from 0 to 500
</td>
<td>
[m]
</td>
<td>
</td>
<td>
Main object relative distance, longitudinal / x-direction (log_applicationid
identifies the sensor providing this measurement, e.g. camera, lidar,
radar, ...).
</td> </tr>
<tr>
<td>
y
</td>
<td>
double
</td>
<td>
from -50 to 50
</td>
<td>
[m]
</td>
<td>
</td>
<td>
Main object relative distance, lateral / y-direction (log_applicationid
identifies the sensor providing this measurement, e.g. camera, lidar,
radar, ...).
</td> </tr>
<tr>
<td>
obstacle_ID
</td>
<td>
int
</td>
<td>
from 0 to 1000
</td>
<td>
[-]
</td>
<td>
</td>
<td>
ID of the obstacle detected by
environmental sensors
</td> </tr>
<tr>
<td>
obstacle_covariance
</td>
<td>
float64
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Covariance matrix of positions of lon, lat, altitude of RADAR detected objects
</td> </tr>
<tr>
<td>
ObjectClass
</td>
<td>
int
</td>
<td>
from 0 to 65
</td>
<td>
[-]
</td>
<td>
</td>
<td>
65 classes from the Mapillary dataset --> see
http://research.mapillary.com/publication/iccv17a/
</td> </tr>
<tr>
<td>
lanewidthsensorbased
</td>
<td>
double
</td>
<td>
from 0 to 10
</td>
<td>
[m]
</td>
<td>
</td>
<td>
Lane width measured by on-board sensor(s).
</td> </tr>
<tr>
<td>
lanewidthmapbased
</td>
<td>
double
</td>
<td>
from 0 to 10
</td>
<td>
[m]
</td>
<td>
</td>
<td>
Lane width from map information.
</td> </tr>
<tr>
<td>
trafficsigndescription
</td>
<td>
string
</td>
<td>
</td>
<td>
[N/A]
</td>
<td>
</td>
<td>
signrecognition -- as defined in IVI - ISO TS 19321 (2015) v1
</td> </tr>
<tr>
<td>
speedlimit_sign
</td>
<td>
double
</td>
<td>
from 0 to 250
</td>
<td>
[km/h]
</td>
<td>
</td>
<td>
signrecognition -- as defined in IVI - ISO TS 19321 (2015) v1
</td> </tr>
<tr>
<td>
servicecategory
</td>
<td>
enum
</td>
<td>
['dangerWarning', 'regulatory', 'informative', 'publicFacilities', 'ambientCondition', 'roadCondition']
</td>
<td>
[N/A]
</td>
<td>
</td>
<td>
signrecognition -- as defined in IVI - ISO TS 19321 (2015) v1
</td> </tr>
<tr>
<td>
servicecategorycode
</td>
<td>
int
</td>
<td>
[11, 12, 13, 21, 31, 32]
</td>
<td>
[N/A]
</td>
<td>
</td>
<td>
signrecognition -- as defined in IVI - ISO TS 19321 (2015) v1
</td> </tr>
<tr>
<td>
countrycode
</td>
<td>
string
</td>
<td>
</td>
<td>
[N/A]
</td>
<td>
</td>
<td>
signrecognition -- as defined in IVI - ISO TS 19321 (2015) v1 (ISO 3166-1
alpha-2)
</td> </tr>
<tr>
<td>
pictogramcategorycode
</td>
<td>
int
</td>
<td>
from 0 to 999
</td>
<td>
[N/A]
</td>
<td>
</td>
<td>
signrecognition -- as defined in IVI - ISO TS 19321 (2015) v1
</td> </tr>
<tr>
<td>
VRU_pedestrian_class
</td>
<td>
int
</td>
<td>
from 0 - 3
</td>
<td>
1 = children, 2 = adults, 3 = elderly
</td>
<td>
sub classes of pedestrians
</td> </tr>
<tr>
<td>
VRU_cyclist_class
</td>
<td>
int
</td>
<td>
from 0 - 3
</td>
<td>
1 = children, 2 = adults, 3 = elderly
</td>
<td>
sub classes of cyclists/riders
</td> </tr>
<tr>
<td>
confidence_levels
</td>
<td>
double
</td>
<td>
from 0 - 100
</td>
<td>
[%]
</td>
<td>
</td>
<td>
Indication for false positive detections (minimum default level)
</td> </tr>
<tr>
<td>
Environ_info
</td>
<td>
int
</td>
<td>
from 1 - 6
</td>
<td>
[-]
</td>
<td>
</td>
<td>
1=sunny/day, 2=raining/day, 3=snow/day,
4=night/dry, 5=raining/night, 6=snow/night
</td> </tr>
<tr>
<td>
Road_hazard
</td>
<td>
int
</td>
<td>
from 0 to 42
</td>
<td>
[N/A]
</td>
<td>
</td>
<td>
No standardized dataset available --> current proposal: pothole detection,
slippery road, black ice etc.
</td> </tr>
<tr>
<td>
sensor_position
</td>
<td>
int
</td>
<td>
from 0 to 1000
</td>
<td>
[mm]
</td>
<td>
</td>
<td>
Position of the sensor on the vehicle w.r.t. the centre of gravity;
required for correlating environmental detections with IoT detections.
</td> </tr>
<tr>
<td>
process_delay
</td>
<td>
int
</td>
<td>
from 0 to 1000
</td>
<td>
[ms]
</td>
<td>
</td>
<td>
Indicates whether the processing delay is known or unknown.
</td> </tr> </table>
## V2X messages dataset
**Table 13 – V2X messages dataset description**
<table>
<tr>
<th>
Dataset Reference
</th>
<th>
**AUTOPILOT_PS_UC_V2X_ID**
</th> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
V2X messages communicated during test sessions
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
This dataset refers to the V2X messages generated from the communication
between the vehicles and any other party that could affect the vehicle,
including other vehicles and the pilot site infrastructure.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
The V2X messages are mainly generated from the ITS-G5 communication standard.
After the project, the metadata are enriched with ZENODO's metadata, including
title, creator, date, contributor, pilot site, use case, description,
keywords, format, resource type, etc.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CAM, DENM, IVI, SPAT, MAP, CSV
</td> </tr>
<tr>
<td>
Data Sharing
</td>
<td>
This dataset is widely open to be used by 3rd party applications and is
deposited in the ZENODO repository.
</td> </tr>
<tr>
<td>
Archiving and Preservation
</td>
<td>
During the project, the data are first stored on the pilot site test server
before being transferred to the centralized test server. At the end of the
project, the data set is archived and preserved in the ZENODO repository.
</td> </tr> </table>
### V2X parameters description
**Table 14 – V2X parameters description**
<table>
<tr>
<th>
**Name**
</th>
<th>
**Type**
</th>
<th>
**Range**
</th>
<th>
**Unit**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
rowid
</td>
<td>
serial
</td>
<td>
0..
</td>
<td>
[N/A]
</td>
<td>
sequence of row numbers to uniquely identify a log line by <log_stationid,
log_timestamp, rowid>; only necessary when a subtable is logged
</td> </tr>
<tr>
<td>
log_timestamp
</td>
<td>
long
</td>
<td>
from 0 to 4398046511103
(= 2⁴²-1)
</td>
<td>
msec
</td>
<td>
timestamp at which the log_stationid logs (writes) the data row. elapsed time
since midnight January 1st 1970 UTC
</td> </tr>
<tr>
<td>
log_stationid
</td>
<td>
long
</td>
<td>
from 0 to 4294967295
(= 2³²-1)
</td>
<td>
[N/A]
</td>
<td>
unique id of the host (e.g. stationid, server id, IoT platform or device id,
cloud service id, ...) that logs this log data row. Log_stationid can be
another host than the source
generating the data to be logged
</td> </tr>
<tr>
<td>
log_applicationid
</td>
<td>
long
</td>
<td>
from 0 to 4294967295
(= 2³²-1)
</td>
<td>
[N/A]
</td>
<td>
unique id of the application, instance or thread, on the log_stationid host
that logs this log data row.
Applicationid is at least unique within the log_station. ApplicationId is
mandatory if multiple components on a host log to the same table or if the
application logging into a table is not trivial (e.g. it is trivial that a CAM
Basic Service is the only application logging CAM messages in the cam table).
For vehicle data, the log_applicationid is also used to identify specific
physical and virtual sensors, such as front_camera, radar, lidar, GPS, CAN
</td> </tr>
<tr>
<td>
log_action
</td>
<td>
enum
</td>
<td>
['SENT', 'RECEIVED']
</td>
<td>
[N/A]
</td>
<td>
Action that triggered the logging event. (Enum: 'SENT', 'RECEIVED')
</td> </tr>
<tr>
<td>
log_communicationprofile
</td>
<td>
enum
</td>
<td>
['ITS_G5',
'CELLULAR',
'UWB',
'LTE_V2X']
</td>
<td>
[N/A]
</td>
<td>
Communication profile, medium or path used to send or receive the message.
This needs to be logged in case messages are communicated via alternative
profiles, e.g. when multiple channels are used to communicate similar
messages. Default is ITS_G5.
</td> </tr>
<tr>
<td>
log_messagetype
</td>
<td>
enum
</td>
<td>
['ETSI.CAM',
'ETSI.DENM',
'ISO.IVI',
'ETSI.MAPEM',
'ETSI.SPATEM']
</td>
<td>
[N/A]
</td>
<td>
Type of standardised message, used for automated processing in case multiple
message types are combined in a single log file. The enum fields refer to the
<standardisation organisation>.<message type>.
</td> </tr>
<tr>
<td>
log_messageuuid
</td>
<td>
uuid
</td>
<td>
</td>
<td>
[N/A]
</td>
<td>
Universal Unique Identifier of the message. This is an alternative for the
identification of messages from the message contents. If used, then the uuid
should also be included in the payload of the message and communicated between
senders and receivers.
</td> </tr>
<tr>
<td>
data
</td>
<td>
string
</td>
<td>
</td>
<td>
[N/A]
</td>
<td>
CAM or DENM message payload
</td> </tr> </table>
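To make the field semantics above concrete, the sketch below shows how a row of such a V2X log could be decoded in Python. It is a minimal, hedged example: the column names are taken from Table 14, but the exact CSV layout produced by each pilot site may differ, and the `read_v2x_log` helper is illustrative rather than part of any project tooling.

```python
import csv
from datetime import datetime, timezone

def read_v2x_log(path):
    """Yield decoded V2X log rows from a CSV file whose columns follow
    Table 14 (an assumption; per-pilot-site layouts may differ)."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ts_ms = int(row["log_timestamp"])  # ms since 1970-01-01 UTC
            yield {
                "logged_at": datetime.fromtimestamp(ts_ms / 1000.0, tz=timezone.utc),
                "station": int(row["log_stationid"]),
                "application": int(row["log_applicationid"]),
                "action": row["log_action"],             # 'SENT' | 'RECEIVED'
                "profile": row.get("log_communicationprofile", "ITS_G5"),
                "message_type": row["log_messagetype"],  # e.g. 'ETSI.CAM'
                "payload": row["data"],                  # CAM/DENM payload
            }
```

A consumer could then, for instance, count SENT versus RECEIVED CAM messages per station to estimate communication losses during a test session.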
## Application messages dataset
**Table 15 – Applications dataset description**
<table>
<tr>
<th>
Dataset Reference
</th>
<th>
**AUTOPILOT_PS_UC_APPLICATION_ID**
</th> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Application data collected during test sessions
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
This data refers to the data generated by AD applications.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
The application messages are mainly text application statuses exchanged
between AD applications and services.
After the project, the metadata are enriched with ZENODO's metadata, including
title, creator, date, contributor, pilot site, use case, description,
keywords, format, resource type, etc.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
Data Sharing
</td>
<td>
This dataset is widely open to be used by 3rd party applications and is
deposited in the ZENODO repository.
</td> </tr>
<tr>
<td>
Archiving and Preservation
</td>
<td>
During the project, the data are first stored on the pilot site test server
before being transferred to the centralized test server. At the end of the
project, the data set is archived and preserved in the ZENODO repository.
</td> </tr> </table>
### Application messages parameters description
**Table 16 – Platooning event parameters description**
<table>
<tr>
<th>
**Name**
</th>
<th>
**Type**
</th>
<th>
**Range**
</th>
<th>
**Unit**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
rowid
</td>
<td>
serial
</td>
<td>
0..
</td>
<td>
[N/A]
</td>
<td>
sequence of row numbers to uniquely identify a log line by <log_stationid,
log_timestamp, rowid>
</td> </tr>
<tr>
<td>
log_timestamp
</td>
<td>
long
</td>
<td>
from 0 to 4398046511103
(= 2⁴²-1)
</td>
<td>
[millisecond]
</td>
<td>
timestamp at which the
log_stationid logs (writes) the data row. elapsed time since midnight January
1st 1970 UTC
</td> </tr>
<tr>
<td>
log_stationid
</td>
<td>
long
</td>
<td>
from 0 to 4294967295
(= 2³²-1)
</td>
<td>
[N/A]
</td>
<td>
unique id of the host (stationid or processing unit id) that logs this log
data row. Log_stationid can be another host than the source generating the
data to be logged
</td> </tr>
<tr>
<td>
log_applicationid
</td>
<td>
long
</td>
<td>
from 0 to 4294967295
(= 2³²-1)
</td>
<td>
[N/A]
</td>
<td>
unique id of the application, instance or thread, on the log_stationid host
that logs this log data row. Applicationid is at least unique within the
log_station. ApplicationId is mandatory if multiple components on a host log
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
to the same table or if the application logging into a table is not trivial
(e.g. it is trivial that a CAM Basic Service is the only application logging
CAM messages in the cam table).
</td> </tr>
<tr>
<td>
eventtypeid
</td>
<td>
enum
</td>
<td>
</td>
<td>
</td>
<td>
event type is defined in the EventModels tables, e.g. for GLOSA
</td> </tr>
<tr>
<td>
eventid
</td>
<td>
long
</td>
<td>
</td>
<td>
</td>
<td>
unique id defined by the stationid. Note that this cannot be specific to an
application within a station
</td> </tr>
<tr>
<td>
action
</td>
<td>
enum
</td>
<td>
['VEHICLE',
'PLATOON_SERVICE']
</td>
<td>
[N/A]
</td>
<td>
Action that triggered the logging event. (Enum: 'VEHICLE', 'PLATOON_SERVICE')
</td> </tr> </table>
**Table 17 – Platooning action parameters description**
<table>
<tr>
<th>
**Name**
</th>
<th>
**Type**
</th>
<th>
**Range**
</th>
<th>
**Unit**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
rowid
</td>
<td>
serial
</td>
<td>
0..
</td>
<td>
[N/A]
</td>
<td>
sequence of row numbers to uniquely identify a log line by
<log_stationid, log_timestamp, rowid>
</td> </tr>
<tr>
<td>
log_timestamp
</td>
<td>
long
</td>
<td>
from 0 to
4398046511103
(= 2⁴²-1)
</td>
<td>
[millisecond]
</td>
<td>
timestamp at which the
log_stationid logs (writes) the data row. elapsed time since midnight January
1st 1970
UTC
</td> </tr>
<tr>
<td>
log_stationid
</td>
<td>
long
</td>
<td>
from 0 to 4294967295
(= 2³²-1)
</td>
<td>
[N/A]
</td>
<td>
unique id of the host (stationid or processing unit id) that logs this log
data row. Log_stationid can be another host than the source generating the
data to be logged
</td> </tr>
<tr>
<td>
log_applicationid
</td>
<td>
long
</td>
<td>
from 0 to 4294967295
(= 2³²-1)
</td>
<td>
[N/A]
</td>
<td>
unique id of the application, instance or thread, on the log_stationid host
that logs this log data row. Applicationid is at least
unique within the log_station. ApplicationId is mandatory if multiple
components on a host log to the same table or if the application logging into
a table is not trivial (e.g. it is trivial that a CAM Basic
Service is the only application logging CAM messages in the cam table).
</td> </tr>
<tr>
<td>
eventid
</td>
<td>
long
</td>
<td>
</td>
<td>
</td>
<td>
unique id defined by the stationid. Note that this cannot be specific to an
application within a station
</td> </tr>
<tr>
<td>
eventmodelid
</td>
<td>
int
</td>
<td>
</td>
<td>
</td>
<td>
unique id of the event model from the EventModels sheet
</td> </tr>
<tr>
<td>
eventactionid
</td>
<td>
int
</td>
<td>
</td>
<td>
</td>
<td>
unique id of the action from the EventModels sheet
</td> </tr>
<tr>
<td>
platooningserviceid
</td>
<td>
?
</td>
<td>
</td>
<td>
</td>
<td>
PlatooningService to subscribe to
</td> </tr>
<tr>
<td>
leaderstationid
</td>
<td>
long
</td>
<td>
from 0 to 4294967295
(= 2³²-1)
</td>
<td>
[N/A]
</td>
<td>
stationid from platoon leader, e.g. from PlatoonFormation
</td> </tr>
<tr>
<td>
platoonid
</td>
<td>
long
</td>
<td>
from 0 to 4294967295
(= 2³²-1)
</td>
<td>
[N/A]
</td>
<td>
platoonid e.g. from
PlatoonFormation message
</td> </tr>
<tr>
<td>
generationtimestamputc
</td>
<td>
long
</td>
<td>
from 0 to 9223372036854775807, or
from 0 to 4398046511103 (= 2⁴²-1)
</td>
<td>
[millisecond]
</td>
<td>
generationtimestamputc from the PlatoonFormation message
</td> </tr> </table>
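Both tables above identify a log line by the composite key <log_stationid, log_timestamp, rowid>. The sketch below shows a simple integrity check built on that convention; it assumes rows are provided as dicts (for example from `csv.DictReader`) and is illustrative only, not part of the project's evaluation tooling.

```python
def find_duplicate_log_keys(rows):
    """Return the composite keys <log_stationid, log_timestamp, rowid>
    that occur more than once in an iterable of row dicts.

    An empty result means every log line is uniquely identified,
    as Tables 16 and 17 require.
    """
    seen, duplicates = set(), set()
    for row in rows:
        key = (row["log_stationid"], row["log_timestamp"], row["rowid"])
        if key in seen:
            duplicates.add(key)
        seen.add(key)
    return duplicates
```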
## Surveys dataset
**Table 18 – Surveys dataset description**
<table>
<tr>
<th>
Dataset Reference
</th>
<th>
**AUTOPILOT_PS_UC_SURVEYS_ID**
</th> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Surveys data collected during test sessions
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
This data refers to the data resulting from the answers of surveys and
questionnaires for user acceptance evaluation.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
Survey data will be collected using well-known tools (Google Forms,
SurveyMonkey, etc.). The definition of a common format for survey data is
still in progress by the user acceptance evaluation team.
After the project, the metadata are enriched with ZENODO's metadata, including
title, creator, date, contributor, pilot site, use case, description,
keywords, format, resource type, etc.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV, PDF, XLS
</td> </tr>
<tr>
<td>
Data Sharing
</td>
<td>
This dataset is widely open to be used by 3rd party applications and is
deposited in the ZENODO repository. It is important to note that these data
are **anonymized** before sharing.
</td> </tr>
<tr>
<td>
Archiving and Preservation
</td>
<td>
During the project, the data are first stored on the pilot site test server
before being transferred to the centralized test server. At the end of the
project, the data set is archived and preserved in the ZENODO repository.
</td> </tr> </table>
## Brainport datasets
### Platooning
**Table 19 – Brainport platooning datasets description**
<table>
<tr>
<th>
**AUTOPILOT_BrainPort_Platooning_DriverVehicleInteraction**
</th> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data extracted from the CAN of the vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains e.g. throttlestatus, clutchstatus, brakestatus,
brakeforce, wipersstatus, steeringwheel for the vehicle
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_Platooning_EnvironmentSensorsAbsolute**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data extracted from the vehicle environment sensors
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains information about detected objects, with absolute
coordinates
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_Platooning_EnvironmentSensorsRelative**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data extracted from the vehicle environment sensors
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains information about detected objects, with relative
coordinates
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_Platooning_IotVehicleMessage**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data sent between all devices, vehicles and services
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
Each sensor data submission is a Message. A Message has an Envelope, a Path,
and optionally (but likely) Path Events and optionally Path Media. The
envelope bears fundamental information about the individual sender (the
vehicle), but not to a level at which the owner of the vehicle can be
identified or at which different messages originating from a single vehicle
can be linked.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_Platooning_PlatoonFormation**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data sent from PlatoonService to vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains information about the route and speed for a specific
vehicle for forming a platoon
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_Platooning_PlatooningAction**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data logged by vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains information about the current status of the platooning
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_Platooning_PlatooningEvent**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data logged by vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains information about the identifiers used for each specific
platooning event
</td> </tr> </table>
<table>
<tr>
<th>
File format
</th>
<th>
CSV
</th> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_Platooning_PlatoonStatus**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data sent by vehicle to PlatoonService
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains information about the current status of the platooning
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_Platooning_PositioningSystem**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data from GPS on the vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains speed, longitude, latitude, heading from the GPS
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_Platooning_PositioningSystemResample**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data from GPS on the vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains speed, longitude, latitude and heading from the GPS,
resampled to 100 milliseconds
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_Platooning_PSInfo**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data sent by PlatoonService to the vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains speed and route information for the vehicle to create a
platoon
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_Platooning_Target**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data from sensors on the vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
Target detection in the vicinity of the host vehicle, by a vehicle sensor or
virtual sensor
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_Platooning_Vehicle**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data from the CAN and sensors about the state of the vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains, among others, the temperature and battery state of the vehicle
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_Platooning_VehicleDynamics**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data from the CAN and sensors about the state of the vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains, among others, the accelerations and speed limit of the
vehicle, as observed from the CAN and the external sensors
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
</table>
### Automated valet parking
**Table 20 – Brainport automated valet parking datasets description**
<table>
<tr>
<th>
**AUTOPILOT_BrainPort_AutomatedValetParking_DriverVehicleInteraction**
</th> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data extracted from the CAN of the vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains e.g. throttlestatus, clutchstatus, brakestatus,
brakeforce, wipersstatus, steeringwheel for the vehicle
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_AutomatedValetParking_DroneAvpCommand**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data sent from drone
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains route information for a vehicle to a designated parking
spot
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_AutomatedValetParking_EnvironmentSensorsAbsolute**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data extracted from the vehicle environment sensors
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains information about detected objects, with absolute
coordinates
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_AutomatedValetParking_EnvironmentSensorsRelative**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data extracted from the vehicle environment sensors
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains information about detected objects, with relative
coordinates
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_AutomatedValetParking_IotVehicleMessage**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data sent between all devices, vehicles and services
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
Each sensor data submission is a Message. A Message has an Envelope, a Path,
and optionally (but likely) Path Events and optionally Path Media. The
envelope bears fundamental information about the individual sender (the
vehicle), but not to a level at which the owner of the vehicle can be
identified or at which different messages originating from a single vehicle
can be linked.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_AutomatedValetParking_ParkingSpotDetection**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data sent from drone to parkingService
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains information about detected parking spots
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_AutomatedValetParking_PositioningSystem**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data from GPS on the vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains speed, longitude, latitude, heading from the GPS
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_AutomatedValetParking_PositioningSystemResampled**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data from GPS on the vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains speed, longitude, latitude and heading from the GPS,
resampled to 100 milliseconds
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_AutomatedValetParking_Vehicle**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data from the CAN and sensors about the state of the vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains, among others, the temperature and battery state of the vehicle
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_AutomatedValetParking_VehicleAvpCommand**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data sent from ParkingService to vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains the route to the parking spot, and some other
environmental information
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_AutomatedValetParking_VehicleAvpStatus**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data sent from vehicle to ParkingService
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains information about the current status and parking status
of the vehicle
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_AutomatedValetParking_VehicleDynamics**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data from the CAN and sensors about the state of the vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains, among others, the accelerations and speed limit of the
vehicle, as observed from the CAN and the external sensors
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr> </table>
### Highway pilot
**Table 21 – Brainport highway pilot datasets description**
<table>
<tr>
<th>
**AUTOPILOT_BrainPort_HighwayPilot_AdasCommand**
</th> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data from the automated driver assistance system
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains the ADAS commands in the vehicle
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_HighwayPilot_Anomaly**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data sent from detecting vehicle to the service, and from service to vehicles
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains information about all the detected anomalies on the road
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_HighwayPilot_AnomalyImage**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data sent from detecting vehicle to service
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains images of the detected anomalies
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_HighwayPilot_DriverVehicleInteraction**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data extracted from the CAN of the vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains e.g. throttlestatus, clutchstatus, brakestatus,
brakeforce, wipersstatus, steeringwheel for the vehicle
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_HighwayPilot_EnvironmentSensorsAbsolute**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data extracted from the vehicle environment sensors
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains information about detected objects, with absolute
coordinates
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_HighwayPilot_EnvironmentSensorsRelative**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data extracted from the vehicle environment sensors
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains information about detected objects, with relative
coordinates
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_HighwayPilot_Hazard**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data sent from service to vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains specific information for a vehicle about anomalies and
hazards
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_HighwayPilot_IotVehicleMessage**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data sent between all devices, vehicles and services
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
Each sensor data submission is a Message. A Message has an Envelope, a Path,
and optionally (but likely) Path Events and optionally Path Media. The
envelope bears fundamental information about the individual sender (the
vehicle), but not to a level at which the owner of the vehicle can be
identified or at which different messages originating from a single vehicle
can be linked.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_HighwayPilot_PositioningSystem**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data from GPS on the vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains speed, longitude, latitude, heading from the GPS
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_HighwayPilot_Vehicle**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data from the CAN and sensors about the state of the vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains, among others, the temperature and battery state of the vehicle
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_HighwayPilot_VehicleDynamics**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data from the CAN and sensors about the state of the vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains, among others, the accelerations and speed limit of the
vehicle, as observed from the CAN and the external sensors
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr> </table>
### Urban driving
**Table 22 – Brainport urban driving datasets description**
<table>
<tr>
<th>
**AUTOPILOT_BrainPort_ UrbanDriving_DriverVehicleInteraction**
</th> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data extracted from the CAN of the vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains e.g. throttlestatus, clutchstatus, brakestatus,
brakeforce, wipersstatus, steeringwheel for the vehicle
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_ UrbanDriving_EAI2Mobile**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data from the service to the mobile
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains information sent to the mobile about the Estimated
Arrival time and position
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_ UrbanDriving_EnvironmentSensorsAbsolute**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data extracted from the vehicle environment sensors
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains information about detected objects, with absolute
coordinates
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_ UrbanDriving_EnvironmentSensorsRelative**
</td> </tr>
<tr>
<td>
Dataset
Nature
</td>
<td>
Data extracted from the vehicle environment sensors
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains information about detected objects, with relative
coordinates
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_ UrbanDriving_IOT_CEMA_Message**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data from the service to the vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains information from the Crowd Estimation and Mobility
Analytics service
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_ UrbanDriving_IOT_FlowRadar_Message**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data from the vehicle to the service
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains the GPS information (speed, position, heading) from the
vehicle
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_ UrbanDriving_IotVehicleMessage**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data sent between all devices, vehicles and services
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
Each sensor data submission is a Message. A Message has an Envelope, a Path,
and optionally (but likely) Path Events and optionally Path Media. The
envelope bears fundamental information about the individual sender (the
vehicle), but not to a level at which the owner of the vehicle can be
identified or at which different messages originating from a single vehicle
can be linked.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_ UrbanDriving_IOT_VehicleStatus**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data sent from the vehicle to the service
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains the current status of the vehicle
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_ UrbanDriving_PositioningSystem**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data from GPS on the vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains speed, longitude, latitude and heading from the GPS
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_ UrbanDriving_SmartphoneGPS**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data sent by the mobile to the service
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains the GPS information (speed, position, heading) from the
mobile
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_ UrbanDriving_SmartphoneStatus**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data sent from the mobile to the service
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains the current status of the mobile
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_ UrbanDriving_TaxiRequest**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data sent from the mobile to the service
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains the requests for a taxi from the mobile phones
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_ UrbanDriving_Vehicle**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data from the CAN and sensors about the state of the vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains, among others, the temperature and battery state of the vehicle
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_BrainPort_ UrbanDriving_VehicleDynamics**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data from the CAN and sensors about the state of the vehicle
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset contains, among others, the accelerations and speed limit of the
vehicle, as observed from the CAN and the external sensors
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr> </table>
## Livorno datasets
### Urban driving
**Table 23 – Livorno urban driving datasets description**
<table>
<tr>
<th>
**AUTOPILOT_Livorno_UrbanDriving_Vehicle_all**
</th> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data generated from the vehicle sensors
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset refers to the vehicle datasets generated from the vehicle sensors
during urban driving at Livorno. This includes the data coming from the CAN
bus and GPS.
It includes the following kinds of data:
Vehicle: general data (speed, battery)
PositioningSystem: data from GPS
VehicleDynamics: data about dynamics (acceleration, ...)
LateralControl: steering and lane control data
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_Livorno_UrbanDriving_V2X_all**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
V2V messages during urban driving sessions
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset refers to the V2V messages exchanged between ITS stations
(vehicles and RSUs) during the urban driving in Livorno.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_Livorno_UrbanDriving_IoT_all**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data extracted from IoT oneM2M platform
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset refers to messages exchanged by Urban Driving devices,
applications and services across the oneM2M platform.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr> </table>
### Highway pilot
**Table 24 – Livorno highway pilot datasets description**
<table>
<tr>
<th>
**AUTOPILOT_Livorno_HighwayPilot_Vehicle_all**
</th> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data generated from the vehicle sensors
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset refers to the vehicle datasets generated from the vehicle sensors
during highway piloting at Livorno. This includes the data coming from the CAN
bus and GPS.
It includes the following kinds of data:
Vehicle: general data (speed, battery)
PositioningSystem: data from GPS
VehicleDynamics: data about dynamics (acceleration, ...)
LateralControl: steering and lane control data
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_Livorno_HighwayPilot_V2X_all**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
V2V messages during highway pilot sessions
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset refers to the V2V messages exchanged between ITS stations
(vehicles and RSUs) during the Highway Piloting in Livorno.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_Livorno_HighwayPilot_IoT_all**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data extracted from IoT oneM2M platform
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset refers to messages exchanged by HighwayPilot devices,
applications and services across the oneM2M platform.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr> </table>
## Versailles datasets
### Platooning
**Table 25 – Versailles platooning datasets description**
<table>
<tr>
<th>
**AUTOPILOT_Versailles_Platooning_Vehicle**
</th> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data generated from the vehicle sensors
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
Vehicle datasets generated by the vehicle sensors during platooning at
Versailles. This includes the data coming from the CAN bus and GPS. It
includes the following kinds of datasets:
Vehicle: general data (speed, battery)
PositioningSystem: data from GPS
VehicleDynamics: data about dynamics (acceleration, ...)
LateralControl: steering and lane control data
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_Versailles_Platooning_V2X**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
V2X messages during platooning sessions
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset refers to the V2V messages exchanged between the vehicles during
the platooning at Versailles.
TCPwlan logs contain mainly the identification of sender and receiver and the
payload extracted from TCP messages captured on the CAN bus.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_Versailles_Platooning_IoT**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data extracted from IoT oneM2M platform
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset refers to messages exchanged by the platooning application and
services, the Traffic Light Assist service and traffic light controllers
across the oneM2M platform.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr> </table>
### Urban driving
**Table 26 – Versailles urban driving datasets description**
<table>
<tr>
<th>
**AUTOPILOT_Versailles_UrbanDriving_Vehicle**
</th> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data generated from the vehicle sensors
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
Vehicle datasets generated by the vehicle sensors during urban driving at
Versailles. This includes the data coming from the CAN bus and GPS. It
includes the following kinds of datasets:
Vehicle: general data (speed, battery)
PositioningSystem: data from GPS
VehicleDynamics: data about dynamics (acceleration, ...)
Accel: acceleration data
EnvironmentSensorsAbsolute: environment sensors in absolute coordinates
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_Versailles_UrbanDriving_V2X**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
V2X messages during urban driving sessions
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
Data exchanged with other vehicles and pedestrians during urban driving.
SortOfCam: messages sent or received from bicycles
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_Versailles_UrbanDriving_IoT**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data extracted from IoT oneM2M platform
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset refers to messages exchanged by the urban driving and car sharing
applications with the vehicle, across the oneM2M platform. oneM2M: car sharing
status data
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_Versailles_UrbanDriving_CAM**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
CAM messages
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset refers to messages captured inside the vehicle during car
sharing.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr> </table>
## Vigo datasets
### Automated valet parking
**Table 27 – Vigo automated valet parking datasets description**
<table>
<tr>
<th>
**AUTOPILOT_Vigo_Automated_Valet_Parking_Vehicle_all**
</th> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data generated from the vehicle sensors
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
Datasets generated from the vehicle sensors
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_Vigo_Automated_Valet_Parking_V2X_all**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
V2X messages during AVP sessions
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset refers to the V2X messages generated from the communication
between the vehicle and the infrastructure.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_Vigo_Automated_Valet_Parking_IoT_all**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data extracted from IoT oneM2M platform
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset refers to the IoT datasets generated from IoT devices
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr> </table>
### Urban driving
**Table 28 – Vigo urban driving datasets description**
<table>
<tr>
<th>
**AUTOPILOT_Vigo_Automated_Urban_Driving_Vehicle_all**
</th> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data generated from the vehicle sensors
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
Datasets generated from the vehicle sensors
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_Vigo_Automated_Urban_Driving_V2X_all**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
V2X messages during urban driving sessions
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset refers to the V2X messages generated from the communication
between the vehicle, other vehicles and the infrastructure.
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
**AUTOPILOT_Vigo_Automated_Urban_Driving_IoT_all**
</td> </tr>
<tr>
<td>
Dataset Nature
</td>
<td>
Data extracted from IoT oneM2M platform
</td> </tr>
<tr>
<td>
Dataset
Description
</td>
<td>
This dataset refers to the IoT datasets generated from IoT devices
</td> </tr>
<tr>
<td>
File format
</td>
<td>
CSV
</td> </tr> </table>
# FAIR data management principles
The data generated during and after the project should be **FAIR** 9
, that is Findable, Accessible, Interoperable and Reusable. These requirements
do not affect implementation choices and do not necessarily suggest any
specific technology, standard, or implementation solution.
The FAIR principles were formulated to improve the practices for data
management and data curation. They are intended to apply to a wide range of
data management purposes, from data collection to the data management of
larger research projects, regardless of scientific discipline.
With the endorsement of the FAIR principles by H2020 and their implementation
in the H2020 guidelines, the FAIR principles serve as a template for lifecycle
data management and ensure that the most important components of the lifecycle
are covered.
This is intended as an implementation of the FAIR concept rather than a strict
technical implementation of the FAIR principles. The AUTOPILOT project has
implemented several actions, described below, to carry out the FAIR principles.
## Making data findable, including provisions for metadata
* The data sets will have rich metadata to facilitate findability. In particular, for IoT data, the metadata are based on the oneM2M standard.
* All the data sets will have a Digital Object Identifier (DOI) provided by the public repository (ZENODO).
* The reference used for the data sets will follow the format **AUTOPILOT_PS_UC_Datatype_XX** (see the sketch after this list).
* The standards for metadata are defined in the chapter 5 tables and explained in section 2.2.
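As a concrete illustration of the reference convention above, the following sketch composes and loosely validates references of the AUTOPILOT_PS_UC_Datatype_XX form. It is indicative only: the catalogued references in chapter 5 deviate slightly from the template (e.g. multi-word use cases such as Automated_Valet_Parking, or an optional trailing suffix), so the pattern below is deliberately permissive, and the helper name is not part of any project tooling.

```python
import re

# Deliberately permissive: "AUTOPILOT" followed by one or more
# underscore-separated alphanumeric components.
REFERENCE_PATTERN = re.compile(r"^AUTOPILOT(_[A-Za-z0-9]+)+$")

def build_reference(pilot_site, use_case, datatype, suffix=None):
    """Compose a dataset reference such as AUTOPILOT_Versailles_Platooning_V2X."""
    parts = ["AUTOPILOT", pilot_site, use_case, datatype]
    if suffix:
        parts.append(suffix)
    return "_".join(parts)

ref = build_reference("Versailles", "Platooning", "V2X")
assert REFERENCE_PATTERN.match(ref)
```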
## Making data openly accessible
* All the data sets that are openly available are described in chapter 5.
* The data sets for evaluation are accessible via the AUTOPILOT centralized test server.
* The data sets are made available using a public repository (e.g. ZENODO) after the project.
* The data sharing entries in chapter 5 explain the methods or software used to access the data. Basically, no software is needed to access the data.
* The data and their associated metadata are deposited in a public repository or in an institutional repository.
* The data sharing entries in section 4 outline the rules to access the data if restrictions exist.
## Making data interoperable
* The metadata vocabularies, standards and methodologies will depend on the public repository and are mentioned in the chapter 5 tables.
* AUTOPILOT WP2 has taken several actions in order to define common data formats.
This work was developed in task 2.1 for vehicle data and task 2.3 for IoT
data. The goal is to have the same structure across pilot sites and to enable
evaluators to deal with the same format for all pilot sites.
* AUTOPILOT pilot sites use IoT platforms based on oneM2M standards to enable data interoperability across pilot sites.
## Increase data re-use (through clarifying licenses)
* All the data producers will license their data to allow the widest reuse possible.
* By default, the data are made available for reuse. If any constraints exist, an embargo period is mentioned in the section 4 tables to keep the data restricted for a limited period of time.
* The data producers will make their data available to third parties within public repositories. The data can be reused for the validation of scientific publications.
# Responsibilities
In order to face the data management challenges efficiently, all AUTOPILOT
partners have to respect the policies set out in this DMP, and datasets have
to be created, managed and stored appropriately.
The Data Controller role within AUTOPILOT is undertaken by Francois Fischer
(ERTICO), who will directly report to the AUTOPILOT Ethics Board. The Data
Controller acts as the point of contact for data protection issues and will
coordinate the actions required to liaise between different beneficiaries and
their affiliates, as well as their respective data protection agencies, in
order to ensure that data collection and processing within the scope of
AUTOPILOT are carried out according to EU and national legislation. Regarding
the ORDP, the Data Controller must ensure that data are shared and easily
available.
Each data producer and WPL is responsible for the integrity and compatibility
of its data during the project lifetime. The data producer is responsible for
sharing its datasets through open access repositories and is in charge of
providing the latest version.
Regarding ethical issues, the deliverable D7.1 details all the measures that
AUTOPILOT will use to comply with the H2020 Ethics requirements.
The Data Manager role within AUTOPILOT will directly report to the Technical
Meeting Team (TMT). The Data Manager will coordinate the actions related to
data management and, in particular, compliance with the Open Research Data
Pilot guidelines. The Data Manager is responsible for implementing the data
management plan and ensures it is reviewed and revised.
# Ethical issues and legal compliance
As explained in Chapter 2, the central IoT platform is a cloud platform that
is hosted on IBM infrastructure, and maintained by IBM IE. It will integrate
and aggregate data from the various vehicles and pilot sites.
All data transfers to the IBM hosted central IoT Platform are subject to and
conditional upon compliance with the following requirements:
* Prior to any transfer of data to the IBM hosted central IoT platform, all partners must execute an agreement as provided for in Attachment 6 of Autopilot Collaboration Agreement.
* All partners must commit not to provide personal data to the central IoT platform and represent that they have secured all necessary authorizations and consents before sharing data or any other type of information ("Background, Results, Confidential Information and/or any data") with other parties.
* Every partner that needs to send and store data in the central IoT platform has to request access to the servers, and inform IBM IE what type of data they will send.
* IBM IE will review all data sources BEFORE approving them and allowing them into the central IoT platform, to ensure they are transformed into data that cannot be traced back to personal information.
* No raw videos/images or private information can be sent to the central IoT platform. The partners who will send data to the platform must anonymize data first. Only anonymized information that is extracted from the raw images/videos (e.g., distance between cars, presence of pedestrians, etc.) is accepted and stored.
* The central IoT platform will only be made available to the consortium partners, and not to external entities.
* IBM IE reserves the right to suspend partner’s access in case of any suspicious activities detected or non-compliant data received. IBM IE may re-grant access to the platform if a solution demonstrating how to prevent such sharing of personal data and sensitive personal data is reached and implemented.
* IBM IE may implement validation procedures to check that the submitted data structures and types are compliant with what the partners promised to send to the central IoT platform (a minimal illustrative sketch follows this list).
* All the data will be deleted from all servers of the central IoT platform at the end of the project.
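To illustrate the kind of validation procedure referred to above, the sketch below rejects a record before upload if it carries raw media or directly identifying fields. It is a hedged, illustrative example: the field names are hypothetical, and each partner would adapt the deny-list to the data structures it declared to IBM IE before requesting access.

```python
# Hypothetical deny-list; adapt to the data structures each partner
# declared to IBM IE before requesting access to the central platform.
DENIED_FIELDS = {"raw_image", "raw_video", "driver_name", "licence_plate"}

def validate_payload(payload: dict) -> None:
    """Raise before a record is sent to the central IoT platform if it
    carries raw media or directly identifying fields."""
    offending = sorted(DENIED_FIELDS.intersection(payload))
    if offending:
        raise ValueError(f"fields not allowed on the central platform: {offending}")
```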
# Conclusion
This deliverable provides an overview of the data that the AUTOPILOT project
produced, together with the related data processes and requirements taken into
consideration.
The document outlines the data set types with detailed descriptions and
explains the processes followed for test sites and evaluation with high-level
representations.
Chapter 5, which describes the data sets, has been enriched with the latest
progress of the project compared to the previous version of the DMP (D6.9).
This includes detailed descriptions of the standards, methodologies, sharing
policies and storage methods.
This final version of the Data Management Plan provides all the details
concerning the datasets. These datasets are the results of the test sessions
performed at pilot site level. However, some additional data may be provided
later, as soon as the partners agree on the data to be shared.
Work on data preparation and data processing is ongoing and will help
increase the amount of data to be shared in public repositories after the
project, according to the ORDP initiative.
|
# Executive Summary
It is well known that modern research builds on extensive scientific dialogue
and advances by improving earlier work made public. Following the FAIR Data
Principles (Findable, Accessible, Interoperable, and Re-usable) paves the way
towards this over-arching objective.
The purpose of this document is to deliver the first version of the Data
Management Plan, in which the FAIR principles have been applied.
The approach taken for this document was to consult all key documents made
available by the EU to create a Data Management Plan: the H2020 Programme
Guidelines to the Rules on Open Access to Scientific Publications and Open
Access to Research Data in Horizon 2020 (V3.2, 21 March 2017), the H2020 AGA —
Annotated Model Grant Agreement (V2.2, 25 November 2016), the Guidelines on
FAIR Data Management in Horizon 2020 (V3.0, 26 July 2016) along with ANNEX 1 —
Horizon 2020 FAIR DMP template and the issues highlighted to be covered in the
DMP, and lastly the online tool DMP Online, which was suggested in the section
"Further support in developing your DMP."
This is a living document; as information becomes available, the document will
be updated to reflect a finer level of granularity.
# Introduction
CloudPerfect is a part of the Open Research Data Pilot (ORD pilot) which aims
to encourage good data management, and maximise access to and re-use of
research data generated by Horizon 2020 projects. As part of this ORD pilot,
the objective of this document is to deliver an initial Data Management Plan,
from here forward referred to as DMP.
The DMP intends to ensure FAIR data management, meaning research data should
be Findable, Accessible, Interoperable, and Re-usable. These principles have
guided the DMP towards its ultimate goal: managing research data to encourage
knowledge discovery and innovation, and therefore data and knowledge
integration and re-use.
This document is delivered under WP 1 – Project & Innovation Management and
Technical Coordination. The DMP will also be delivered in a second version,
D1.4 Final Data Management Plan, at project month 22.
The document begins with Chapter 2 – Background, which describes the
document's objective, scope and approach.
Chapter 3 – Data Summary addresses what data is collected or created, what the
utility of this data may be for others, its purpose in relation to the
project objectives, and what data will be open or closed (with rationale).
Chapter 4 – Responsibilities and Resources addresses responsibilities
regarding data management, questions about the resources/costs of making the
data FAIR, and aspects of data quality assurance.
Furthermore, Chapter 5 – Ethics and Legal Compliance addresses how the project
will manage any ethical issues and how copyright and Intellectual Property
Rights (IPR) issues will be managed. It also addresses who will own the
copyright and IPR of any existing data as well as of new data that will be
generated, and outlines any restrictions needed on data sharing.
Chapter 6- Data Set Information will address the following areas for each type
of Data-Set: data collection, documentation and metadata, data sharing and re-
use, storage and backup, preservation and data security.
To conclude, the final Chapter- Next steps addresses the approach taken to
keep the Data Management Plan updated and to reflect information made
available as the project progresses.
# Background
## Objective
The specific objective of this document is to deliver a DMP.
The CloudPerfect intends to fulfil the requirements of the Open Research Data
Pilot by following the steps below:
1. Creating a DMP by closely following all guidelines, templates and suggestions provided by the EC (see Section 2.3- Approach). This document represents the initial version. A second version will be delivered at project month 22.
2. Deposit research data in a research data repository 1 . The CloudPerfect project KPI, as far as the DMP is concerned, is to deposit data in at least 3 repositories, aim for >200 downloads, and communicate this to the wider scientific communities.
3. As far as possible, take measures to enable third parties to access, mine, exploit, reproduce and disseminate (free of charge for any user) the research data. Also, provide information, via the chosen repository, about the tools and instruments available to the beneficiaries that are needed to validate the results, such as specialised software, algorithms and analysis protocols.
## Scope
The project understands that the DMP does not concern publications, only
research data. The below figure, taken from the H2020 Programme Guidelines on
the Rules on Open Access to Scientific Publications and Open Access to
Research Data in Horizon 2020, gives an overall picture.
_Figure 1- Open access to scientific publication and research data in the
wider context of dissemination and exploitation_
According to the H2020 Programme AGA – Annotated Model Grant Agreement 2 , the
types of research data are:
1. The 'underlying data' (the data needed to validate the results presented in scientific publications), including the associated metadata (i.e. metadata describing the research data deposited)
2. Any other data, including the associated metadata, as specified in the DMP – that is, according to the individual judgement of each project/grantee. Examples: curated data not directly attributable to a publication, or raw data.
The below section describes the approach taken for the creation of
CloudPerfect DMP.
## Approach
The approach taken for the creation of the DMP was to consult key documents:
H2020 Programme Guidelines to the Rules on Open Access to Scientific
Publications and Open Access to Research Data in Horizon 2020 (V3.2 21 March
2017) 3 , H2020 AGA — Annotated Model Grant Agreement (V2.2 25 November
2016) 4 , Guidelines on FAIR Data Management in Horizon 2020 (V3.0- 26 July
2016) 5 along with ANNEX 1- Horizon 2020 FAIR DMP template and issues
highlighted to be covered in the DMP,
2 http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/amga/h2020-amga_en.pdf
3 http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-pilot-guide_en.pdf
4 http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/amga/h2020-amga_en.pdf
5 http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hioa-data-mgt_en.pdf
and lastly the online tool, DMP Online 2 which was suggested in the section
“Further support in developing your DMP.”
All resources were considered and are represented in this document. All the
questions to be answered (at a level appropriate to the project) and the
issues highlighted in the templates are found in the different chapters of
this deliverable and are left in their original form.
Data Sets decided to be open were grouped by type and assigned to a project
team member who then provided answers to the questions. All answers are in
Chapter 6-Data Set Information.
The DMP is a living document. As mentioned in the Guidelines, detailed answers
to all the suggested questions are not required in this initial version.
# Data Summary
This section is organized by stating the question and answer immediately
after.
**Question: What data will you collect or create? What Research Data is
open?**
**Which data are of long-term value and should be retained, shared, and/or
preserved?**
The data collected/created can be grouped into three different Data Sets. The
following data sets would be of value to the envisioned spin-off at project
end.
1. Data Set: CloudPerfect Software
This dataset includes the source code and the binary packages of the software
produced within the scope of CloudPerfect. Namely, Cloudiator code and
Benchmark code.
2. Data Set: Monitoring and Profiling Data
This dataset provides two types of data: the first is raw monitoring data
collected from virtual or physical infrastructures, including application
metrics. The second is derived data, created from the raw data through
mathematical and statistical means. Namely: Cloudiator Application Level
Monitoring Data, Infrastructure Monitoring Data, Profiling Vectors,
Performance Models, Benchmark Data on Providers, and SLA data.
3. Data Set: CFD Use Case models and methods
The data comes from Phitec Ingegneria srl research activity on Computational
Fluid Dynamics (CFD) Use Cases (UC) within the CloudPerfect environment
testing activity. Namely, CFD Use case benchmark data.
**Question: What is the purpose of the data collection/generation and its
relation to the objectives of the project?**
CloudPerfect relates to ICT-06-2016 and more specifically to the
Experimentation scope and covers also a significant part of the scope on
“Cloud Computing for SMEs and Public Sector Innovation.” Therefore, the
purpose of the data collection/generation is to be able to fulfil the
objectives set part of the project scope, which are the following:
Objective 1: Enhance Cloud provider resource management practices that lead to
improved stability of offered services and QoE for end users.
Objective 2: Enhance Cloud services behaviour measurability, predictability
and auditability in order to increase operational trust, thus enabling their
use in more critical applications and minimizing procurement time for SMEs
targeting Cloud environments.
Objective 3: Enable a set of abstraction layers on top of existing tools that
will aid increased uptake of the produced artefacts and their ability to be
launched against multiple providers/services without differences.
Objective 4: Define and apply sets of metrics that are meaningful to the end
users for QoE and QoS Levels.
Objective 5: Enhance cloud-based application’s competitiveness through
minimizing the per user cost, via optimizing the resource selection process
based on application categorization and provider benchmarking on the same
categories for creation of multi-tier SLAs.
Objective 6: Pilot and demonstrate the applicability and scalability of the
tools on large scale Cloud infrastructures and for applications that typically
need specialized oversized infrastructures.
Objective 7: Enable the extension of the Cloud business model and consulting
roles.
**Question: Will existing data be reused and how?**
CloudPerfect builds on research results from previous and on-going
European projects, increasing the quality of results and helping to advance
the market. Therefore, existing components and functionalities
that have been developed and prototyped in other projects were brought into
CloudPerfect. Here below is a description of the assets, the originating
project and the scope of extension/integration in CloudPerfect.
1. 3ALib (SLA monitoring library) from the project ARTIST 3 .
Integration with QoE DB and extension to analytics queries for identification
of key metrics, improvement of front-ends and multi-user functionality,
improved packaging and deployment process.
2. Benchmarking Suite (Cloud services benchmarking tool) from project ARTIST
Increase benchmarking scope through incorporation of other major categories
(e.g. CFD specific benchmarks, security benchmarks etc.), improved packaging
and deployment process.
3. Open Reference Model (Metrics and standards specification) from project SLALOM 8 New metrics may be added and applied on top of retrieved data.
4. CLOUDIATOR (Orchestration, Deployment and App management tool) from project PaaSAge 9
Improved packaging and deployment process, increased Modularity,
configurability and integratability, support for software updates and
continuous integration.
5. Intelligent Services (Overhead Prediction model and IRMOS & Classification tools) from project ARTIST
Extension with direct monitoring and acquisition mechanisms, deployment on a
Cloud environment, improved packaging and automation process, and integration
with Cloud management systems.
**Question: What Research Data is closed? Provide rationale for doing so.**
CloudPerfect is aware that, as stated in the H2020 AGA — Annotated Model Grant
Agreement, “beneficiaries may decide not to provide open access to specific
datasets if this would go against other GA obligations (e.g. to protect
results or personal data) or if the action’s main objective…would be
jeopardised by giving open access to those specific datasets.” Furthermore,
“This does not change the obligation to protect results in Article 27, the
confidentiality obligations in Article 36, the security obligations in Article
37 or the obligations to protect personal data in Article 39, all of which
still apply.”
No closed data sets have been listed as of the date of the submission of this
deliverable.
# Responsibilities and Resources
**Question: Who will be responsible for data management?**
The management of the data sets is shared amongst the project partners who
contribute details about the datasets in the DMP deliverables and those who
take care of depositing research data in open research data repositories,
and is overseen by the Quality Manager, who monitors the KPIs, which
include: depositing data in at least 3 repositories, aiming for >200
downloads, and communicating this to the wider scientific communities.
**Question: What resources will you require to deliver your plan? Estimate the
costs for making your data FAIR. Describe how you intend to cover these costs.
Describe costs and potential value of long term preservation.**
As of the date of the submission of this deliverable, CloudPerfect estimates
no costs related to the implementation of the ORD pilot (e.g. costs for
providing open access, related research data management costs, data curation
and data storage costs.)
**Question: What is the data quality assurance process?**
CloudPerfect understands data quality to be the perception/assessment of
data's fitness to serve its purpose in a given context. The project partners
in charge of depositing research data in open research data repositories
ensure that the data can be used by individuals who were not involved in its
creation; this serves as a quality check. Attention is given to consistency
across data sources and appropriate presentation.
To note, CloudPerfect has a Quality Assurance Office (QAO), a task force
constituted to deal with all quality-related aspects within the
project. The mandate of the QAO is to ensure that all the activities,
procedures, policies, milestones, deliverables and tools are of high quality
by continuously monitoring and assessing their status and results. The QAO
actively collaborates with all the project management structures to ensure the
quality of project outcomes (i.e. deliverables, milestones, software).
# Ethics and Legal Compliance
**Question: How will you manage any ethical issues?**
The CloudPerfect DoA declared in Section 5.1 that no ethics issues are
involved in the project.
Article 34- Ethics and Research Integrity in the Grant Agreement describes
the obligation to comply with ethical principles and research integrity, and
the procedure for raising an ethical issue:
Before the beginning of an activity raising an ethical issue, each beneficiary
must have obtained:
1. any ethics committee opinion required under national law and
2. any notification or authorisation for activities raising ethical issues required under national and/or European law needed for implementing the action tasks in question.
The documents must be kept on file and be submitted upon request by the
coordinator to the Commission (see Article 52). If they are not in English,
they must be submitted together with an English summary, which shows that the
action tasks in question are covered and includes the conclusions of the
committee or authority concerned (if available).
**Question: How will you manage copyright and Intellectual Property Rights
(IPR) issues?**
The policy reflects the current state of the consortium agreements on data
management. _Section 8- Results_ and _Section 9- Access Rights_ in the
Consortium Agreement give a detailed explanation of this topic, including
when an objection is justified and the procedure to follow:
An objection is justified if:
1. the protection of the objecting Party's Results or Background 4 would be adversely affected, or
2. the objecting Party's legitimate interests in relation to the Results or Background would be significantly harmed.
The objection has to include a precise request for necessary modifications.
1. If an objection has been raised the involved Parties shall discuss how to overcome the justified grounds for the objection on a timely basis (for example by amendment to the planned publication and/or by protecting information before publication) and the objecting Party shall not unreasonably continue the opposition if appropriate measures are taken following the discussion.
2. The objecting Party can request a publication delay of not more than 90 calendar days from the time it raises such an objection. After 90 calendar days the publication is permitted.
Furthermore, outlined in the Consortium Agreement, decisions regarding
intellectual property rights shall be taken by the Project Management Board.
For example: proposals for changes to Annexes 1 and 2 of the Grant Agreement
to be agreed by the Funding Authority, changes to the Consortium Plan,
modifications to Attachment 1 (Background Included) and additions to
Attachment 3 (List of Third Parties for simplified transfer according to
Section 8.3.2.)
Section- Governance structure covers voting rules and quorum, and veto rights,
where a Member which can show that its intellectual property rights or other
legitimate interests would be severely affected by a decision of a Consortium
Body may exercise a veto with respect to the corresponding decision or
relevant part of the decision.
**Who will own the copyright and IPR of any existing data as well as new data
that will be generated?**
All partners are aware that Access Rights to background have to be granted in
principle, but Parties must identify and agree amongst them on the Background
for the project. As of the date of the submission of this deliverable, no
partners have described background for which there are Specific limitations
and/or conditions for implementation (Article 25.2 Grant Agreement) or
Specific limitations and/or conditions for Exploitation (Article 25.3 Grant
Agreement.) Article 26 — Ownership of results in the Grant Agreement states
that results are owned by the beneficiary that generates them.
# Data Set Information
As mentioned in the Chapter-Approach, all resources made available by the EU
about creating a DMP are represented in this document. All the questions said
to be answered (with a level appropriate to the project) and issues
highlighted in the templates are found here and left in their original form.
Therefore, the format is question(s) and the answer(s) immediately after.
## Data Set - CFD Use Case Models and Methods
The data comes from Phitec Ingegneria srl research activity on Computational
Fluid Dynamics (CFD) Use Cases (UC) within the CloudPerfect environment
testing activity.
### Data Collection
The following questions are addressed.
What is the origin of the data?
How will the data be collected or created?
To whom might it be useful?
What is the size of the data?
What is the type(s) and format(s) of data?
##### Response
The CFD Use Case models and methods dataset will be based on reports (PDF
format), software settings (raw text files), geometry (STL format) and
scripts for connecting different software into a complete CFD simulation or
a CFD-based optimization loop (a sketch of such a connecting script is given
at the end of this response).
Although the precise size of the data will vary during the project, it will
stay in the scale of a few hundred megabytes.
The data collected can be useful to:
* researchers/organisations working in the field of applied CFD and optimization for product performance improvements;
* researchers/organisations working in the field of High Performance Computing (HPC) or cloud computing solution testing or optimization, as the data provided will make it possible to perform computationally intensive tests on HPC systems.
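As a purely illustrative sketch of the "scripts for connecting different software" mentioned above, the following Python driver chains hypothetical pre-processing, solver, and post-processing commands into one pipeline. The tool names and flags are placeholders, not the actual project scripts:

```python
import subprocess

# Hypothetical CFD pipeline: mesh a geometry, run the solver, post-process.
# Command names and flags are placeholders for whichever open or closed
# source tools (e.g. OpenFOAM utilities) a given use case actually relies on.
def run_case(geometry_stl, case_dir):
    steps = [
        ["meshTool", "--input", geometry_stl, "--out", case_dir],   # pre-processing
        ["cfdSolver", "--case", case_dir],                          # simulation
        ["postTool", "--case", case_dir, "--report", "report.pdf"], # post-processing
    ]
    for cmd in steps:
        subprocess.run(cmd, check=True)  # stop the loop if any step fails

if __name__ == "__main__":
    run_case("hull.stl", "case_001")
```

In an optimization loop, a script of this shape would be called once per design candidate, with the geometry file varied between runs.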
### Documentation and Metadata
The following questions and points are addressed below.
What documentation and metadata will accompany the data? Specify standards for
metadata creation (if any). If there are no standards in your discipline
describe what type of metadata will be created and how.
_Making data findable, including provisions for metadata_
Outline the discoverability of data (metadata provision).
Outline the identifiability of data and refer to standard identification
mechanism. Do you make use of persistent and unique identifiers such as
Digital Object Identifiers?
Outline naming conventions used
Outline the approach towards search keyword
Outline the approach for clear versioning
##### Response
The collected data will be accompanied by a usage manual that will explain
the dataset's purpose and give basic instructions for using it. This manual
can be distributed directly within the distribution packages or made
available online.
Concerning metadata and discoverability, the dataset will be published on the
Zenodo 5 portal. This will guarantee:
* A unique DOI assigned to each version of the dataset;
* A rich set of metadata associated with the dataset, including description, authors and maintainers, version, release date, free keywords, grant number of funding projects, and links to additional locations for data and documentation;
* Discoverability of metadata through the portal search functionality and programmatically using the OAI-PMH protocol.
The data will adopt the Semantic Versioning 2.0 scheme to assign a unique
version to each release of the data.
No particular naming conventions are in place for the dataset at the time of
writing.
The approach towards search keywords is not yet defined at the time of writing.
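As an illustration of the Semantic Versioning 2.0 scheme adopted above, the following minimal Python sketch (not project code; the regular expression is a simplified subset of the full grammar published at semver.org) parses and orders release identifiers:

```python
import re

# Simplified SemVer 2.0 pattern (MAJOR.MINOR.PATCH plus optional pre-release
# tag); an illustrative subset of the full grammar published at semver.org.
SEMVER = re.compile(r"^(\d+)\.(\d+)\.(\d+)(?:-([0-9A-Za-z.-]+))?$")

def semver_key(version):
    """Return a sortable key; releases without a pre-release tag rank higher."""
    match = SEMVER.match(version)
    if not match:
        raise ValueError("not a semantic version: " + version)
    major, minor, patch, pre = match.groups()
    # (1,) sorts after (0, <tag>), so '1.2.0' ranks above '1.2.0-rc.1'.
    return (int(major), int(minor), int(patch), (1,) if pre is None else (0, pre))

releases = ["1.0.0", "1.2.0-rc.1", "1.2.0", "0.9.1"]
print(sorted(releases, key=semver_key))
# ['0.9.1', '1.0.0', '1.2.0-rc.1', '1.2.0']
```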
### Data Sharing and Re-use
The following questions and points are addressed.
_Making data openly accessible_
How will you share the data? Repository name(s). Specify where the data and
associated metadata, documentation and code are deposited
Are any restrictions on data sharing required? Specify how access will be
provided in case there are any restrictions.
Specify what methods or software tools are needed to access the data? Is
documentation about the software needed to access the data included? Is it
possible to include the relevant software (e.g. in open source code)?
_Making data interoperable_
Assess the interoperability of your data. Specify what data and metadata
vocabularies, standards or methodologies you will follow to facilitate
interoperability.
Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow inter-disciplinary interoperability? If
not, will you provide mapping to more commonly used ontologies?
_Increase Data Re-use (through clarifying licences)_
Specify how the data will be licenced to permit the widest reuse possible
Specify when the data will be made available for re-use. If applicable,
specify why and for what period a data embargo is needed
Specify whether the data produced and/or used in the project is useable by
third parties, in particular after the end of the project? If the re-use of
some data is restricted, explain why
##### Response
The CFD Use Case models and methods dataset will be published and made openly
accessible through at least two repositories:
* The GitHub ( _https://github.com/_ ) platform;
* The Zenodo ( _https://zenodo.org/_ ) portal;
The location of the dataset in those repositories will be clearly linked on
the CloudPerfect website. New versions of the dataset will be published as
soon as they become available, without applying any embargo periods.
All the formats used, both for software settings and scripts, are standard in
computing science and compatible with most existing platforms. This
guarantees a high level of interoperability with other software and runtime
environments.
Open source (e.g. OpenFOAM®, Paraview, FreeCAD, etc.) and closed source
software (e.g. iconCFD® Geometry tool, Beta CAE Ansa, etc.) are needed to
access the data. A clear indication of what software is needed to access and
reuse every package of data will be provided together with the data
themselves.
The dataset will be released under the MIT license 6 . This will guarantee
the full possibility of reusing the data even for commercial purposes.
### Storage and Backup, Preservation and Data Security
The following questions are addressed.
How will the data be stored and backed up during the research?
How will you manage access and security?
Address data recovery as well as secure storage and transfer of sensitive data
What is the long-term preservation plan for the dataset? Specify the length of
time for which the data will remain re-usable.
##### Response
For storage, backup, preservation and security of the dataset, CloudPerfect
relies on the technology and policies provided by the two repositories,
GitHub and Zenodo, where the data will be made available. Both have very
strong policies for these aspects. For instance, Zenodo's metadata database
is backed up every 12 hours, with one backup per week sent to tape storage,
while GitHub stores each repository in at least three different locations
plus an off-site backup. More details can be found at
_http://about.zenodo.org/infrastructure/_ and
_https://help.github.com/articles/github-security/_ .
## Data Set - Monitoring and Profiling Data
This dataset provides two types of data: the first is raw monitoring data
collected from virtual or physical infrastructures, including application
metrics; the second is derived data, created from the raw data through
mathematical and statistical means.
### Data collection
The following questions are addressed.
What is the origin of the data?
How will the data be collected or created?
To whom might it be useful ('data utility')?
What is the size of the data?
What is the type(s) and format(s) of data?
##### Response
All data is created and processed in the project; no external data is used.
The raw monitoring data will be created through various means: application-
level data includes resource consumption time series from virtual machines.
This data set is enhanced with application-level metrics. Both of them are
collected by the Cloudiator toolkit. Infrastructure monitoring data comprises
resource consumption time series from the physical infrastructure in the
testbeds. This data is collected by probes in the testbed and made available
through the cloud platform API.
The derived data will be created by interpreting the raw data with the tools
used in CloudPerfect (mainly Cloudiator, 3ALib, and the Benchmarking Suite),
i.e. by applying the respective tools to the raw monitoring data.
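To make the raw/derived distinction concrete, here is a minimal sketch (illustrative only; field names and values are hypothetical) of how a derived record could be computed from a raw resource-consumption time series through simple statistical means:

```python
import statistics

# Hypothetical raw monitoring samples: (unix_timestamp, cpu_utilisation_percent),
# one sample per minute, as monitoring probes might deliver them.
raw_series = [(1493633700 + 60 * i, cpu)
              for i, cpu in enumerate([12.0, 15.5, 91.2, 88.7, 14.3, 13.9])]

values = [cpu for _, cpu in raw_series]

# Derived record: aggregate statistics computed from the raw time series.
derived = {
    "interval_start": raw_series[0][0],
    "interval_end": raw_series[-1][0],
    "cpu_mean": statistics.mean(values),
    "cpu_median": statistics.median(values),
    "cpu_max": max(values),
}
print(derived)
```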
Raw data is of use to other researchers, namely people intending to apply
new kinds of data analysis to it. This can be with similar goals in mind,
but could also target different questions such as hardware reliability or
the sizing of the hardware infrastructure.
Derived data is of interest for any kind of cloud user and/or data centre
provider that wants to compare the performance of different cloud platforms.
Based on previous experience, we expect to collect about 10 GB per month of
raw monitoring data. The amount of data that is intended for publishing will
likely not exceed 10-20 GB in total.
The derived data will be in the order of at most a few hundred megabytes.
### Documentation and Metadata
The following questions and points are addressed.
What documentation and metadata will accompany the data? Specify standards for
metadata creation (if any). If there are no standards in your discipline
describe what type of metadata will be created and how.
_Making data findable, including provisions for metadata_
Outline the discoverability of data (metadata provision).
Outline the identifiability of data and refer to standard identification
mechanism. Do you make use of persistent and unique identifiers such as
Digital Object Identifiers?
Outline naming conventions used
Outline the approach towards search keyword
Outline the approach for clear versioning
##### Response
Raw data is monitoring data collected from the testbeds and stored in
databases. While many of the 'traditional' metadata attributes such as
'author' are not applicable to it, there is much other metadata, including
the origin of the data, the interval it covers, and the description of the
testbed and applications, possibly referring back to models and meta-models.
We are currently not aware of any metadata standards for monitoring time
series besides Metrics 2.0, which we plan to follow. For derived data no
such standards exist.
In any case, CloudPerfect attempts to provide the necessary metadata expected
by the Dublin Core standard for all data sets, as a lowest common denominator.
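A minimal sketch of what such a Dublin Core lowest-common-denominator record could look like for a monitoring trace is given below; every field value is hypothetical:

```python
# Hypothetical Dublin Core record for a published monitoring trace; element
# names follow the 15-element Dublin Core set, all values are examples only.
dublin_core_record = {
    "dc:title": "CloudPerfect testbed raw monitoring trace, 2017-06-01",
    "dc:creator": "CloudPerfect consortium",
    "dc:subject": "cloud monitoring; resource consumption; time series",
    "dc:description": "One day of infrastructure-level resource consumption "
                      "samples collected from a CloudPerfect testbed.",
    "dc:date": "2017-06-01",
    "dc:type": "Dataset",
    "dc:format": "text/csv",
    "dc:identifier": "doi:10.xxxx/placeholder",  # placeholder, not a real DOI
    "dc:language": "en",
    "dc:rights": "CC BY-SA 4.0",
}
print(dublin_core_record["dc:title"])
```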
The Communication and Information Center (kiz) of UULM runs OPARU, an
OpenAIRE-indexed Open Access server that provides support for the
publication of research data. In addition, the kiz and OMI (UULM's research
department involved in CloudPerfect) are involved in a project that prepares
mechanisms for easy and persistent publication of software artefacts.
With respect to Monitoring and Profiling Data, CloudPerfect will, whenever
possible and useful, rely on that repository for publishing its research
data. This will result in DOIs being created for any published data set or
sets of data sets.
The only hindrance we can foresee at this point is that the size of some data
collections, in particular of raw data, may exceed the maximum allowed size
for OPARU. If this limit is exceeded, other repositories such as Zenodo come
into play. These will be selected on a per-demand basis.
So far, no naming conventions have been decided on. Yet, it is obvious that
the main language being used for describing and naming elements will be
English.
For raw data the description of the data is of importance, and the naming
conventions followed depend on the storage format. For instance, in the case
of a CSV file, the column headers will use descriptive English names so that
the content of each column can be easily identified.
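For illustration, a raw trace exported to CSV might be written as in the following sketch; the column names are hypothetical but follow the descriptive-English-names convention described above:

```python
import csv

# Hypothetical export of raw monitoring samples with self-describing headers.
rows = [
    {"timestamp_utc": "2017-06-01T00:00:00Z", "host": "testbed-node-01",
     "cpu_utilisation_percent": 12.0, "memory_used_megabytes": 2048},
    {"timestamp_utc": "2017-06-01T00:01:00Z", "host": "testbed-node-01",
     "cpu_utilisation_percent": 15.5, "memory_used_megabytes": 2112},
]

with open("monitoring_trace.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]))
    writer.writeheader()  # descriptive English column names, one per field
    writer.writerows(rows)
```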
Search Keyword- Openly accessible data sets shall be published through an
online repository and be linked to a DOI. In consequence, each of them will
also be enhanced with a title, the authors, an abstract, and a set of keywords
that describe the data. Where abstracts are not sufficient, data sets shall be
enhanced with a longer descriptive text of, e.g., one to two pages.
Clear Versioning- For raw data traces, the location of the testbed as well as
the time span covered by the traces will be used as an identifier. Where
applicable, the benchmarked application type and the imposed load will also
be part of the identifier. Data sets derived from raw data will contain the
identifier of the raw data and a timestamp in their name.
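A hedged sketch of how such identifiers could be assembled (the project has not fixed an exact scheme; all names below are illustrative):

```python
from datetime import datetime, timezone

def raw_trace_id(testbed, start, end, app_type=None, load=None):
    """Identifier for a raw trace: testbed location plus covered time span,
    optionally extended with the benchmarked application type and load."""
    parts = [testbed, start.strftime("%Y%m%dT%H%M"), end.strftime("%Y%m%dT%H%M")]
    if app_type:
        parts.append(app_type)
    if load:
        parts.append(load)
    return "_".join(parts)

def derived_id(raw_id):
    """Derived data sets carry the raw trace id plus a creation timestamp."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    return raw_id + "_derived_" + stamp

rid = raw_trace_id("ulm-testbed", datetime(2017, 6, 1), datetime(2017, 6, 2),
                   app_type="webapp", load="high")
print(rid)  # ulm-testbed_20170601T0000_20170602T0000_webapp_high
print(derived_id(rid))
```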
### Data Sharing and Re-use
The following questions and points are addressed.
_Making data openly accessible_
How will you share the data? Repository name(s). Specify where the data and
associated metadata, documentation and code are deposited
Are any restrictions on data sharing required? Specify how access will be
provided in case there are any restrictions.
Specify what methods or software tools are needed to access the data? Is
documentation about the software needed to access the data included? Is it
possible to include the relevant software (e.g. in open source code)?
_Making data interoperable_
Assess the interoperability of your data. Specify what data and metadata
vocabularies, standards or methodologies you will follow to facilitate
interoperability.
Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow inter-disciplinary interoperability? If
not, will you provide mapping to more commonly used ontologies?
_Increase Data Re-use (through clarifying licences)_
Specify how the data will be licenced to permit the widest reuse possible
Specify when the data will be made available for re-use. If applicable,
specify why and for what period a data embargo is needed
Specify whether the data produced and/or used in the project is useable by
third parties, in particular after the end of the project? If the re-use of
some data is restricted, explain why
##### Response
Data to Share- We believe that it is neither necessary nor required to keep
all raw data forever, or even to publish all raw monitoring data. This is
mainly a consequence of the large storage requirements in relation to the
expected high similarity of the data at different phases (e.g. nights and
weekends). Instead, the project is considering storing and publishing
representative samples of the collected raw data, e.g. capturing the span of
one day. We also consider publishing thinned-out data of longer periods
(e.g. a month). In any case, such data will be published unless IPR,
confidentiality, or exploitation plans oppose this.
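One possible way such thinning could be implemented is a simple stride-based subsample, sketched below; the project has not committed to a particular method:

```python
def thin(series, keep_every=10):
    """Keep every n-th sample of a long raw trace, reducing volume while
    preserving the overall shape of the time series."""
    return series[::keep_every]

# One month of per-minute samples thinned to per-10-minute resolution.
month_of_samples = list(range(60 * 24 * 30))  # stand-in for real samples
published = thin(month_of_samples, keep_every=10)
print(len(month_of_samples), "->", len(published))  # 43200 -> 4320
```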
Data Location- As described under Documentation and Metadata above,
CloudPerfect will, whenever possible, rely on the OPARU repository run by
the Communication and Information Center (kiz) of UULM for publishing its
research data, resulting in DOIs for any published data set or sets of data
sets. Should the size of some data collections, in particular of raw data,
exceed the maximum allowed size for OPARU, other repositories such as Zenodo
will be selected on a per-demand basis.
Restrictions- It is currently not planned by the consortium to provide data
sets with restricted access. Data will either be made available as OpenAccess
or not at all.
Data Access- All data generated in CloudPerfect is intended to be used by
software and machines. While it is possible to access and manually read raw
data, it is generally not useful. Instead, it is recommended to make use of
software for that purpose. While the CloudPerfect software can be used to read
and evaluate the data, we will also ensure that the data sets and their
formats are sufficiently well documented for others to write their own
software for accessing them.
Interoperability- It is currently unclear to what extent data and metadata
vocabularies, standards, and methodologies are applicable to data sets
collected by CloudPerfect. For time series data as for the raw monitored data,
Metrics 2.0 is an option.
Vocabularies and Ontologies- It is currently unclear to what extent standard
vocabularies and commonly used ontologies are applicable to data sets
collected by CloudPerfect.
License- Whenever possible, the data sets shall be made available under a
Creative Commons "Attribution-ShareAlike 4.0 International" license. Where
needed, more restrictive licenses such as
Creative Commons "Attribution-NonCommercial-ShareAlike 4.0 International" or
Creative Commons "Attribution-NonCommercial-NoDerivates 4.0 International"
will be used. Whenever the release of data conflicts with a partner’s
exploitation strategy, the partner may pick a more restrictive license.
Release- Data will be made available for re-use as soon as possible; at the
very earliest, this will be upon submission of the respective deliverables.
We expect that the release of research data aligned with the deliverables
will usually take around two months. Where needed, project partners are
granted an embargo period of 12 months.
Third Parties- CloudPerfect aims to make produced and published data
available to third parties unless this goal is contradicted by partners'
exploitation plans; this holds particularly for monitoring and profiling
data. With respect to raw data, not all monitored data will be published,
due to its sheer size; instead one or multiple representative samples will
be used.
### Storage and Backup, Preservation and Data Security
The following questions are addressed.
How will the data be stored and backed up during the research?
How will you manage access and security?
Address data recovery as well as secure storage and transfer of sensitive data
What is the long-term preservation plan for the dataset? Specify the length of
time for which the data will remain re-usable.
##### Response
Raw data used during the project will be stored in modern, reliable, and
highly available distributed databases such as MongoDB, Cassandra, and HBase.
These will form the basis of all data mining performed in the project. All
other forms of data, including meta-information and models, will be contained
in data repositories that use a multi-device ZFS storage backend. In
addition, the data is stored in a local backup every hour and in a remote
backup once a day.
So far, we are not considering using, storing, or publishing any secure or
confidential data with special requirements regarding secure storage. In
particular, data that comes from public sources such as production systems
will be anonymised before even being introduced into CloudPerfect tools.
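A minimal sketch of what such an anonymisation step could look like, assuming salted hashing of identifying fields (the salt, field names, and hostname are placeholders):

```python
import hashlib

# Placeholder salt; a real deployment would keep the salt private so that
# pseudonyms cannot be reversed by recomputing hashes of known hostnames.
SALT = b"project-secret-salt"

def pseudonymise(hostname):
    digest = hashlib.sha256(SALT + hostname.encode("utf-8")).hexdigest()
    return "host-" + digest[:12]

record = {"host": "prod-db-03.example.org", "cpu_utilisation_percent": 42.0}
record["host"] = pseudonymise(record["host"])
print(record)  # hostname replaced by a stable pseudonym
```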
Currently, no costs for long-term storage are foreseeable. According to
UULM policy, data published through OPARU with a DOI will be kept for at
least ten years (a recommendation of the German Research Foundation). The
life-span of other data that will be publicly accessible (e.g. through data
repositories) will be clarified in the project's exploitation plans.
Specify the length of time for which the data will remain re-usable.
No plans exist to restrict the period of time for which the data remains
usable.
## Data Set - CloudPerfect Software
This dataset includes the source code and the binary packages of the software
tools produced within the scope of CloudPerfect.
### Data collection
The following questions are addressed.
What is the origin of the data?
How will the data be collected or created?
To whom might it be useful ('data utility')?
What is the size of the data?
What is the type(s) and format(s) of data?
##### Response
CloudPerfect software is based on four distinct software prototypes inherited
from previous EU research initiatives and developed by different partners in
the consortium:
* Cloudiator from the PaaSAge project (developed by UULM);
* 3ALib from the ARTIST project (developed by ICCS);
* Benchmarking Suite from the ARTIST project (developed by ENG);
* Profiling and Classification tool from IRMOS and ARTIST projects (developed by ICCS).
Source code of these prototypes will be modified and improved during the
CloudPerfect project in order to increase their TRL and integrate them
into homogeneous and coherent releases. Binary packages will be generated for
each release of the tools starting from the source code using automated
integration tools.
Although the precise size of the data will vary during the project, it will
stay in the scale of megabytes both for source code and binaries. Source code
consists mostly of raw files in various formats, mainly Java, Python, raw
text, XML and JSON. Binary packages will be installable packages. In order to
support multiple platforms, they will be created in different formats like
zip, rpm, deb, and jar.
The CloudPerfect software can be of interest to different stakeholders:
* Developers interested in integrating with CloudPerfect tools, inspecting the source code, fixing defects, maintaining and/or evolving it;
* Cloud Adopters interested in using the CloudPerfect tools to profile their applications;
* Cloud Providers interested in installing CloudPerfect tools to benchmark their resources and optimize the usage of their hardware resources.
### Documentation and Metadata
The following questions and points are addressed.
What documentation and metadata will accompany the data? Specify standards for
metadata creation (if any). If there are no standards in your discipline
describe what type of metadata will be created and how.
_Making data findable, including provisions for metadata_
Outline the discoverability of data (metadata provision).
Outline the identifiability of data and refer to standard identification
mechanism. Do you make use of persistent and unique identifiers such as
Digital Object Identifiers?
Outline naming conventions used
Outline the approach towards search keyword
Outline the approach for clear versioning
##### Response
The CloudPerfect software will be accompanied by a usage manual that will
explain each tool's purpose and give basic instructions for using it. This
manual can be distributed directly within the distribution packages or made
available online.
Concerning metadata and discoverability, the software will be published on the
Zenodo 7 portal. This will guarantee:
* A unique DOI assigned to each version of the software;
* A rich set of metadata associated with the software, including description, authors and maintainers, version, release date, free keywords, grant number of funding projects, and links to additional locations for source code, documentation and binaries;
* Discoverability of metadata through the portal search functionality and programmatically using the OAI-PMH protocol;
CloudPerfect will adopt the Semantic Versioning 2.0 scheme to assign a
unique version to each release of the software.
No particular naming conventions are in place for the software at the time of
writing.
The approach towards search keywords is not yet defined at the time of writing.
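As a hedged illustration of the programmatic discoverability route mentioned above, the sketch below lists Dublin Core records from Zenodo's public OAI-PMH endpoint; the endpoint path is Zenodo's documented one, while the set name is a placeholder for whichever Zenodo community the records are actually deposited in:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Zenodo exposes OAI-PMH at /oai2d; oai_dc is the mandatory Dublin Core format.
# The set name "user-cloudperfect" is a placeholder, not a confirmed community.
URL = ("https://zenodo.org/oai2d?verb=ListRecords"
       "&metadataPrefix=oai_dc&set=user-cloudperfect")

with urllib.request.urlopen(URL) as response:
    tree = ET.parse(response)

DC = {"dc": "http://purl.org/dc/elements/1.1/"}
for record in tree.iter("{http://www.openarchives.org/OAI/2.0/}record"):
    title = record.find(".//dc:title", DC)
    doi = record.find(".//dc:identifier", DC)
    if title is not None:
        print(title.text, "-", doi.text if doi is not None else "no identifier")
```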
### Data Sharing and Re-use
The following questions and points are addressed.
_Making data openly accessible_
How will you share the data? Repository name(s). Specify where the data and
associated metadata, documentation and code are deposited
Are any restrictions on data sharing required? Specify how access will be
provided in case there are any restrictions.
Specify what methods or software tools are needed to access the data? Is
documentation about the software needed to access the data included? Is it
possible to include the relevant software (e.g. in open source code)?
_Making data interoperable_
Assess the interoperability of your data. Specify what data and metadata
vocabularies, standards or methodologies you will follow to facilitate
interoperability.
Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow inter-disciplinary interoperability? If
not, will you provide mapping to more commonly used ontologies?
_Increase Data Re-use (through clarifying licences)_
Specify how the data will be licenced to permit the widest reuse possible
Specify when the data will be made available for re-use. If applicable,
specify why and for what period a data embargo is needed
Specify whether the data produced and/or used in the project is useable by
third parties, in particular after the end of the project? If the re-use of
some data is restricted, explain why
##### Response
The CloudPerfect software will be published and made openly accessible
through at least two software repositories:
* The GitHub ( _https://github.com/_ ) platform;
* The Zenodo ( _https://zenodo.org/_ ) portal;
The location of the software in those repositories will be clearly linked on
the CloudPerfect website. New versions of the software will be published as
soon as they become available, without applying any embargo periods.
All the formats used, both for source code (mainly Java, Python, JSON, XML)
and binaries (Java and Python bytecode, zip, deb, rpm), are standard in
computing science and compatible with most existing platforms. This
guarantees a high level of interoperability with other software and runtime
environments.
The CloudPerfect software will be open source and released under the Apache
License 2.0 8 . This will guarantee the full possibility of re-using the
software even for commercial purposes, under the condition of including a copy
of the license and a clear attribution.
### Storage and Backup, Preservation and Data Security
The following questions are addressed.
How will the data be stored and backed up during the research?
How will you manage access and security?
Address data recovery as well as secure storage and transfer of sensitive data
What is the long-term preservation plan for the dataset? Specify the length of
time for which the data will remain re-usable.
##### Response
For storage, backup, preservation and security of the software, CloudPerfect
relies on the technology and policies provided by the two repositories,
GitHub and Zenodo, where the software will be made available. Both have very
strong policies for these aspects. For instance, Zenodo's metadata database
is backed up every 12 hours, with one backup per week sent to tape storage,
while GitHub stores each repository in at least three different locations
plus an off-site backup. More details can be found at
_http://about.zenodo.org/infrastructure/_ and
_https://help.github.com/articles/github-security/_ .
# Next steps
The DMP will be updated over the course of the project whenever significant
changes arise, for example when new data is introduced, consortium policies
change, the consortium composition changes, or external factors require it.
Moreover, as the DMP is a living document, information will be made available
at a finer level of granularity through updates, also in the context of
periodic evaluation/assessment of the project. A second version of this
deliverable will reflect the updates.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
1229_Plan4Act_732266.md
|
**1\. Executive summary**
The purpose of this deliverable is to outline how data will be handled during
the project and after it is completed.
Plan4Act is participating in the EU Open Research Data Pilot, which aims to
improve and maximise access to and re-use of research data generated by the
project. As part of this, we deliver a Data Management Plan (DMP).
The Data Management Plan addresses how data will be collected, generated
and processed during the project, what methodology and standards will be
used, what data will be shared and/or made openly available, and how it will
be curated and preserved.
This document is intended to be the first of two iterations of the DMP that
will be formally submitted during the project. The DMP is thus not a fixed
document; it evolves and becomes more precise during the lifespan of the
project.
**2\. Principles of Data Management.**
# 2.1 Data Management Requirements
The current deliverable is based on the guidelines of the EU Commission
regarding the openness of the data generated by a project funded under
H2020. According to these guidelines, the scientifically-oriented data that
are going to be generated by the Plan4Act project will be formed so that
they are easily **discoverable**, **accessible**, **assessable**,
**intelligible** and **usable** beyond the original purpose of their
collection and usage, but also **interoperable** to appropriate quality
standards.
# 2.2 EU Commission Guidelines for Data Management
The EU Commission has published guidelines for appropriate data management
plans in Horizon 2020 projects. This guide is structured as a series of
questions that should ideally be clarified for all datasets produced in any
H2020 project.
Table 1 below presents the different aspects of the questions raised in the
FAIR Data Management template; Table 2 then presents, for each work package,
a comment validating the conformance of the Plan4Act project.
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
1. State the purpose of the data collection/generation.
2. Explain the relation to the objectives of the project.
3. Specify the types and formats of data generated/collected.
4. Specify if existing data is being re-used (if any).
5. Specify the origin of the data.
6. State the expected size of the data (if known).
7. Outline the data utility: to whom will it be useful.
</td> </tr>
<tr>
<td>
FAIR Data. Making data findable, including provisions for metadata.
</td>
<td>
1. Outline the discoverability of data (metadata provision).
2. Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
3. Outline naming conventions used.
4. Outline the approach towards search keywords.
5. Outline the approach for clear versioning.
6. Specify standards for metadata creation (if any). If there are no standards in your discipline, describe what type of metadata will be created and how.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
1. Specify which data will be made openly available; if some data is kept closed, provide rationale for doing so.
2. Specify how the data will be made available.
3. Specify what methods or software tools are needed to access the data. Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
4. Specify where the data and associated metadata, documentation and code are deposited.
5. Specify how access will be provided in case there are any restrictions.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
1. Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
2. Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability. If not, will you provide mapping to more commonly used ontologies?
</td> </tr>
<tr>
<td>
Increase data re-use (through clarifying licences)
</td>
<td>
1. Specify how the data will be licenced to permit the widest reuse possible.
2. Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed.
3. Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project. If the re-use of some data is restricted, explain why.
4. Describe data quality assurance processes.
5. Specify the length of time for which the data will remain re-usable.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
1. Estimate the costs for making your data FAIR. Describe how you intend to cover these costs.
2. Clearly identify responsibilities for data management in your project.
3. Describe costs and potential value of long term preservation.
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
1. Address data recovery as well as secure storage and transfer of sensitive data.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
1. To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former.
</td> </tr>
<tr>
<td>
Other
</td>
<td>
1. Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any).
</td> </tr> </table>
TABLE 1: FAIR Data Management - Horizon 2020 DMP
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**WP1**
**Neuronal recordings**
</th>
<th>
**WP2**
**Network Modelling**
</th>
<th>
**WP3**
**Hardware Controller**
</th>
<th>
**WP4**
**Smart House interface & control **
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
1. Predictive brain activity and predictive neural planning and the technological use of these results.
2. Data types will be multiunit neuronal data [Cerebus (Blackrock Microsystem) file types: *.nex, *.nsx, *.ccf]; behavioural data: *.csv; and videotracking data [Plexon CinePlex Studio (Dallas, Texas, United States)]:
*.dv3, *.avi, *.dvt.
4. N/A.
5. Neuronal data is recorded from the PRR, PMd, M1 and SMA regions. (6) Expected size of the 128-channel neuronal data will be from GBs to TBs.
(7) Neuronal data will be transferred to WP2.
</td>
<td>
(1) Simulated spike train data, model network data, and simulation / virtual
experiment parameter data. (2) Data will provide virtual point of reference
and comparison for primate data for which ground truth is known.
3. Spike data in .npy and
HDF5, model data in .npy, HDF5, CSV or .txt, parameters in CSV or .txt (may
eventually migrate to PANDAS).
4. N/A.
5. Neural network simulations.
6. Uncertain; depends on completeness of network ground truth required; will be between ~10 MB and 1 GB per virtual experiment. (7) Uni-Goe, Primate Lab, SDU.
</td>
<td>
(1), (2) Data represents network topologies and methods (software) for complex
temporal signal processing of recordings or brain model outputs for the device
interface, i.e., smart house interface.
3. Data type will be C, C++, and VHDL programming codes and network topology diagrams providing implementation independent documentation.
4. A part of the software is from our MOdular Robot COntrol environment.
5. Embodied AI and Neurorobotics Lab at SDU.
6. Probably 5 GB.
7. The data will be useful for smart house control (WP4) within the project and other developers as well as robotic community in the domain of FPGA-based signal processing and control. In a more general perspective, the presented methods are useful for generic brain machine interfaces working on task and intention recognition level.
</td>
<td>
(1), (2) Service interface data with the universAAL service in the Living
Lab; data for communication with the universAAL Smart House Living Lab
service.
3. Data Type will be universAAL Device Ontology and JSON.
4. N/A.
5. Smart House Living Lab.
6. Not known; probably a few KB.
7. The data will be used for the FPGA of WP3.
</td> </tr> </table>
<table>
<tr>
<th>
FAIR Data.
Making data findable, including provisions for metadata.
</th>
<th>
(1), (2), (3), (6) Converted neuronal data are discoverable and recognizable
through the Hierarchical Data Format (HDF5) and comma-separated files (metadata).
4. N/A.
5. WP1 does not apply versioning on the raw data.
</th>
<th>
(1), (2), (3) HDF5 and / or PANDAS.
(4), (5) N/A.
(6) Included timestamp, model parameters, and virtual experiment parameters.
</th>
<th>
(1), (2), (3) Data (developed software) are discoverable and recognizable
through a
Git repository
(https://github.com).
4. Plan4Act software, smart house control, brain signal processing.
5. Clear versioning will be identified by Git commit message.
6. N/A.
</th>
<th>
(1), (2), (3) Data are discoverable and recognizable through universAAL
platform by using RDF and ontologies.
4. N/A
5. Clear versioning will be used during development taking into account backward compatibility.
6. N/A.
</th> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
1. Some parts of the recorded neuronal data (already converted into HDF5 format) will be made openly available within the consortium.
2. Neuronal, behavioural and video-taped data will be hosted on institutional local servers with individualized access to the consortium.
3. Still in development. (4) Local servers at DPZ.
</td>
<td>
1. Spike data and parameter files will be made available within the consortium, as will some elements of simulation code after publication of papers for which the same simulation framework is used.
2. Data will be stored on GWDG servers; spike train data and simulation code will be made public following publications according to standards of journals.
3. Still in development / negotiation.
4. Local GWDG servers.
5. Password protected SSH; subject to change.
</td>
<td>
1. Data (software for FPGA-based complex temporal signal processing, smart house interface, and device control) will be made openly available through GitHub. Special methods are not required to access the data.
2. Data and related information will be hosted and available on GitHub.
(3), (4) N/A.
(5) Data will be publicly available.
</td>
<td>
(1), (3) Data describing devices will be opened within universAAL platform
though REST interface with authentication.
(2) Data will be hosted at LST servers and client code will be available on
the LST GitLab server.
(5) Access will be managed through JSON Web Token computed with shared
secrets.
</td> </tr> </table>
<table>
<tr>
<th>
Making data interoperable
</th>
<th>
(1), (2) Neuronal data will be converted to the standard HDF5 format. Comma-
separated files will be used for recording metadata and behavioural data.
</th>
<th>
1. All data and metadata are in open formats, Python-readable.
2. Standard where applicable, reference readme / cheat-sheet provided otherwise.
</th>
<th>
(1), (2) Standards and interoperability will be provided as a manual of using
the software. This will be available through GitHub.
</th>
<th>
(1), (2) Standards and interoperability will be provided through universAAL.
</th> </tr>
<tr>
<td>
Increase data reuse (through clarifying licences)
</td>
<td>
(1), (2) Data from WP1 will be licenced within the consortium under request.
(3) Unlikely for raw data. (4) Automated quality assurance.
(5) As long as servers remain active.
</td>
<td>
1. Data will be openly licensed within consortium, and under GNU or CC license after publications.
2. As quickly as possible within consortium; in accordance with journal standards following publication.
3. Unlikely for data; possible for simulation code (which is based on an already existing framework), but we maintain flexibility. (4) Readability and unit check upon storage on subsample of data.
(5) As long as servers remain active.
</td>
<td>
(1), (2) Data will be licensed under the free Software Foundation GNU
licenses.
3. Data will be available and useable by third parties including after the end of the project.
4. Unit tests on sample data will provide automated quality assurance, manual tests will
be performed on demonstrators.
5. Data will be available online for undefined length of time on GitHub.
</td>
<td>
(1), (2) Data will be licensed, with credentials, upon request.
3. The reuse of data will be available under request after the end of the project.
4. N/A.
5. The data will remain available for an undefined length of time; they will be integrated into the Smart House Living Lab.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
(1) Short-term costs will be covered by the Plan4Act project. Long-term costs
will be covered by the DPZ. (2) DPZ is responsible for WP1 data.
(3) Currently not known.
</td>
<td>
1. No additional costs anticipated.
2. UGoe is responsible for relevant data.
3. Currently uncertain, no costs anticipated.
</td>
<td>
1. No costs are expected.
2. SDU is responsible for WP3 data.
3. No costs.
</td>
<td>
(1) No costs are expected. (2) UPM is responsible for WP4 data.
(3) No costs.
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
(1) Recorded neuronal, behavioural, metadata and video-taped data will be
stored on local computers and local servers, with additional backup systems
applied locally.
</td>
<td>
(1) Data stored on local
UGoe / GWDG servers with additional local backups by
GWDG.
</td>
<td>
(1) Data backup will be done by DeIC (Danish eInfrastructure Cooperation).
</td>
<td>
(1) Data and service backup, access control and encryption with common
protocols (HTTPS, JWT, etc.).
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
Ethical approval is in place for the use of animals.
</td>
<td>
N/A
</td>
<td>
N/A
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other
</td>
<td>
N/A
</td>
<td>
N/A
</td>
<td>
N/A
</td>
<td>
N/A
</td> </tr> </table>
Table 2: FAIR Data Management - Plan4Act.
**3\. Plan4Act Data Management Plan**
Sections below establish the base for the Data Management Plan.
On one hand, we present the Data Life Cycle (DLC) and, on the other hand, we
present the flow of data generated by our project.
The data life cycle provides a high-level overview of the stages involved in
successful management and preservation of data for use and reuse (Figure 1).
Plan4Act data flow shows the path for the data we create on our project
(Figure 2).
Combining the two, we show how the data meet the established requirements over
their lifetime within the DLC (Figure 3).
**Figure 1: Data management life cycle**
The Data Life Cycle has six components:
**“CREATING DATA”:** description of the data that will be created, and how the
data will be managed and made accessible throughout its lifetime. Data are
accurately and thoroughly described using the appropriate metadata standards.
**“PROCESSING DATA”:** description of the data that will be captured, checked
and validated, as well as where they will be stored.
**“ANALYSING DATA”:** data are analysed and interpreted to produce research
outputs.
**“PRESERVING DATA”:** data are submitted to an appropriate long-term archive.
**“GIVING ACCESS TO DATA”:** internally distributing data, sharing data and
controlling access.
**“RE-USING DATA”:** data must be provided under terms that permit re-use and
redistribution, including intermixing with other datasets.
**Figure 2: Data creation, sharing, and flow between WPs in the project.**
**Figure 3: Data flow within and across WPs for Plan4Act**
Based on the previous diagram “Data flow within and across WPs for Plan4Act”
(Figure 3), we identify four different sources of data that correspond to the
work packages in the project.
In the following, we describe the objectives for each WP to illustrate more
clearly the data transitions and flows.
The first source identified is **neuroscientific** data, corresponding to
**WP1 – “Neuronal recordings”**. The objective of WP1 is to identify neural
activity patterns in the monkey neocortex predicting planned action sequences
in complex environments. The first objective is to establish an advanced
SmartCage equipped with human-like haptic interfaces and a behavioural
paradigm for action sequence planning in unrestrained monkeys. The second
objective is to record massively parallel neural brain activity wirelessly
from many individual neurons simultaneously (100-200 microelectrodes) in 2-3
sensorimotor areas of the cerebral cortex during such action-sequence
planning.
The second source corresponds to **“WP 2- Network modeling”**.
Based on the experimental data from WP1, the central goal of WP2 is to develop
a neural network model predicting the sequence of planned actions based on the
interaction between the input and the self-organized structure of the network.
These results serve as the basis for the neural controller developed in WP3.
The third source corresponds to **“WP3- Hardware controller”**.
The central objective of this WP is to develop a generic hardware adaptable
network controller based on cell assemblies developed in WP2. The standalone
adaptable controller will directly interface to the neural recording system
(WP1) and the smart house system (WP4). It will process recorded sequence-
predicting neural activity (WP1), predict the upcoming sequence of actions,
and generate the corresponding complex action sequences to manipulate the
smart house (WP4).
The fourth and last source corresponds to **“WP4- Smart House interface &
controller”**.
The central objective of this WP is to transfer the sequence-predicting neural
activity of the monkey (WP1) through the adaptive neural controller (WP3) to
the smart house controls.
Taking the identified sources and the data management life cycle described
above as a foundation, the following sections describe the types of data
generated during this project in each work package at each of the data
management steps.
Some work packages might use only parts of the life cycle; for instance, a WP
involved in meta-analysis might bypass the Create step and focus on the
Process and Analyse steps, while a WP focusing on primary data collection
might not perform the Analyse step itself. In addition, other WPs might not
follow the linear path depicted here, or reiterating the cycle might be
necessary.
The data life cycle serves as a navigation tool for the Plan4Act Project,
facilitating partners in discovering recommendations on how to effectively
work with their data across all stages of the data life cycle.
# 3.1 Creating data
## 3.1.1 WP1 – Neuronal recordings
**Classify data:**
* 128-channel neuronal data is recorded by the Cerebus Neural Signal Processor (Blackrock Microsystems, Salt Lake City, Utah, United States). Several Cerebus file types will be recorded. The *.nev files contain digitized extracellular electrode spike information. The *.nsx files contain the digitized raw data at a continuous sampling rate. Configuration information of the recording is saved into *.ccf files (Cerebus Configuration File).
* Monkeys are video-taped during the experiment. Video data is recorded by Plexon CinePlex Studio (Dallas, Texas, United States) with four cameras as input sources. The frame rate of the recorded video is 60 fps. 3D video-tracking information of the monkey's movement is saved into a *.dv3 file. The video format is *.avi, and the program also saves the digital video-tracking information using the *.dvt text format.
## Describe data and create metadata
* Neuronal data is recorded by Blackrock high-performance DAQ system using Cerebus Neural Signal Processor. Details of the file specifications are available online.
(http://support.blackrockmicro.com/KB/View/166838-file-specifications-packet-
details-headersetc).
* Metadata of the neuronal and the video data is stored in comma-separated files (csv file type); a minimal reading sketch is given after the field list below.
* The metafile contains the following information about the recorded data (behavioural and electrophysiological data):
* Date of the recording
* Name of the online sorted spike data
* Headstage information
* Orientation of the antenna
* BR_bank
* Input area (e.g. PMd, PRR, M1)
* Behavioral and log data of the task
* Name of the video file
* Name of the monkey
* Experimenter
* Name of the task
* Config file of the task
* Notes
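For illustration only, the following Python sketch shows how such a metafile could be loaded and filtered with pandas; the file name and column names are assumptions mirroring the fields listed above, not the actual metafile layout.

```python
import pandas as pd

# Hypothetical metafile name and column names, mirroring the fields above.
meta = pd.read_csv('recording_metadata.csv')

# Example query: all sessions recorded from dorsal premotor cortex (PMd)
# by a given experimenter.
pmd_sessions = meta[(meta['input_area'] == 'PMd') &
                    (meta['experimenter'] == 'example_name')]
print(pmd_sessions[['date', 'spike_data_file', 'video_file']])
```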
## 3.1.2 WP 2- Network modeling
**Classify data:**
Python simulation code, spike data, and metadata will be produced.
## Describe data and create metadata
● All simulations will be done in Python. Code will be made available. At
least initially, the BRIAN simulation framework will form the core of the
simulator. Results will be saved, depending on specifics, in .npy, HDF5, or
PANDAS format, with parameters saved in plaintext (.txt) or CSV, possibly
integrated into PANDAS when applicable.
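As a minimal sketch of this workflow (network equations, sizes and file names are illustrative placeholders, not the actual WP2 model), a BRIAN-based simulation saving spike data and parameters could look as follows:

```python
import numpy as np
from brian2 import NeuronGroup, SpikeMonitor, run, ms

# Minimal leaky integrate-and-fire population; all parameters are placeholders.
eqs = 'dv/dt = (1.1 - v) / (10*ms) : 1'
group = NeuronGroup(100, eqs, threshold='v > 1', reset='v = 0', method='exact')
spikes = SpikeMonitor(group)

run(500 * ms)

# Save spike trains in .npy format, with parameters in plaintext (.txt).
np.save('spike_times_ms.npy', np.array(spikes.t / ms))
np.save('spike_neuron_ids.npy', np.array(spikes.i))
with open('parameters.txt', 'w') as f:
    f.write('n_neurons=100\ntau_ms=10\nduration_ms=500\n')
```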
## 3.1.3 WP3- Hardware controller
## Classify data
WP3 is working on recordings obtained from Monkey experiments (WP1) and
Neuronal Network Modelling (WP2/3). These data sets are subject to the
management life cycle defined by the corresponding collaborators. Reduced test
data sets will be provided directly with the source code.
## Describe data and create metadata
The processed brain-activity data for controlling devices in the smart house
of UPM is intended for online use. The processed command stream for the smart
house is not stored, as reprocessing with the provided methods will regenerate
the output stream (derived data). Links to published data from WP1/2 will be
provided with the programming codes.
## 3.1.4 WP4- Smart House interface & controller
## Classify data
* Request/response data to the universAAL Smart House controller service.
The format will be:
* JSON for the service interface,
* Device Ontology of the universAAL context bus
**Describe data and create metadata**
* 3D Living Lab data model.
# 3.2 Processing Data
**3.2.1 WP1 – Neuronal recordings**
## Data capture process
* The task control software is written in C++ and uses animal- and task-specific configuration files (comma-separated files) to run the appropriate task. Neuronal data are recorded from the parietal reach region, dorsal premotor cortex, primary motor cortex and supplementary motor cortex using chronically implanted floating microelectrode arrays. During the experimental recording, we use real-time behavioural video surveillance to track the movements of the animals. We dye one of the animal's wrists; that dyed point serves as the tracking point of the arm. The cover of the wireless transmitter, positioned on top of the animal's head, serves as the second tracking point. This video-tracking information is used as a behavioural control for the plan- and movement-related neuronal signals.
**Check, validate.**
* First, comma-separated metafiles are used to match behavioural, video-tracked and neuronal data. Behavioural data will be pre-processed in Matlab.
Second, video-tracked data will be pre-processed in Plexon CinePlex Studio.
The Plexon CinePlex Studio software will be used to obtain the three-
dimensional spatial coordinates of the tracking point. Third, recorded
neuronal data will be pre-processed using Plexon Offline Sorter, and the data
(e.g. spike and LFP data) will then be converted to HDF5 format. HDF5 is a
widely used file format for storing and organizing large amounts of data. The
subsequent pre-processing steps will be done in the Matlab environment
(MathWorks, Natick, Massachusetts, United States).
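For illustration, the conversion step could look like the following Python sketch using h5py (the actual pipeline described above runs in Matlab; the array names, file names and attribute values here are placeholders):

```python
import h5py
import numpy as np

# Placeholder pre-processed data; the real arrays come from Plexon Offline
# Sorter and the Matlab pre-processing steps.
spike_times_ms = np.sort(np.random.uniform(0, 60000, size=5000))
lfp = np.random.randn(128, 60000).astype(np.float32)  # 128 channels

with h5py.File('session_example.h5', 'w') as f:
    f.create_dataset('spikes/times_ms', data=spike_times_ms)
    f.create_dataset('lfp/raw', data=lfp, compression='gzip')
    # Recording metadata from the CSV metafile, stored as HDF5 attributes.
    f.attrs['input_area'] = 'PMd'
    f.attrs['experimenter'] = 'example_name'
```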
## Manage and store data in local database
The recorded neuronal, behavioural, video and metadata will be stored locally
on the recording computers. After each recording session, these data are
backed up to our local servers (e.g. NAS).
## Transfer data
Pre-processed and extracted neuronal signals will be transferred to WP2 using
local file servers with individual access for consortium members.
**3.2.2 WP 2- Network modeling**
## Data capture process
* Data saved directly from simulation.
**Check, validate.**
* Random validation checks on storage.
**Manage and store data in local database**
* Data directly saved on local storage and GWDG servers.
## Transfer data
Access to local servers will be provided where possible; details are still under negotiation.
**3.2.3 WP3- Hardware controller**
## Plan consent for internal sharing
Data will be internally shared through the ownCloud/Nextcloud service of the
ENS lab with password protection.
## Local data location
Offline data will be stored in the standard HDF5 format in accordance with the
DPZ data storage. Online data exchanged between the embedded FPGA-based signal
processing and control platform and the smart house will be wrapped in a
JSON-based protocol and sent over HTTPS for security.
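A sketch of what such a JSON-wrapped command sent over HTTPS might look like is given below; the endpoint URL, payload fields and token are placeholders, as the actual message format is defined by the WP4 service interface:

```python
import requests

# Hypothetical command derived from the predicted action sequence.
command = {
    'device': 'light_livingroom',   # placeholder device identifier
    'action': 'switch_on',
    'sequence_step': 1,
}

response = requests.post(
    'https://smarthouse.example.org/api/control',  # placeholder URL
    json=command,
    headers={'Authorization': 'Bearer <JWT>'},     # see Section 3.5
    timeout=5,
)
response.raise_for_status()  # fail loudly if the service rejects the command
```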
**3.2.4 WP4- Smart House interface & controller**
Data for the service interface are processed at each service call in real time
in order to change device states. Data related to the 3D Living Lab simulator
are processed into a binary application.
## Plan consent for internal sharing (data shared, consent)
The 3D Living Lab simulator will be based on the OpenSceneGraph Binary (osgb)
format; in any case, it will be embedded into a binary application.
## Local data location
Sensitive data related to the identification of the devices of the Smart House
Living Lab are stored in a MySQL database owned by LST.
# 3.3 Analysing Data
**3.3.1 WP1 – Neuronal recordings**
## Interpret data
Statistical analysis will be done in the Matlab and R programming environments
using HDF5 and/or comma-separated files (*.csv). Matlab and R code will be
saved as *.m and *.r files, respectively. For illustration purposes we use the
*.jpg, *.png and *.svg file types.
## Produce research outputs
Research outputs are produced with Matlab and R. Scientific manuscript(s) will
be prepared with Microsoft Word (Microsoft, Redmond, Washington, United
States).
**3.3.2 WP 2- Network modeling**
## Interpret data
All data analysis will be done in Python. Results will be saved, depending on
specifics, in .npy, HDF5, or PANDAS format.
## Derive data
All simulations will be done in Python. Code will be made available. At least
initially, the BRIAN simulation framework will form the core of the simulator.
Results will be saved, depending on specifics, in .npy, HDF5, or PANDAS
format, with parameters saved in plaintext (.txt) or CSV, possibly integrated
into PANDAS when applicable.
## Produce research outputs
All research outputs will come from Python; technical writing will be
performed in LaTeX.
**3.3.3 WP3- Hardware controller**
## Interpret data
Data analysis will be implemented in C++ and Python, data will be stored in
HDF5 and plaintext (CSV) files, whatever is most appropriate in the context.
The FPGA will be programmed in VHDL.
## Derive data
Derived data will be provided implicitly by the generating programs with
suitable parameter sets.
## Produce research outputs
Technical writing will be performed in LaTeX; the research output will be
produced by the C++ and Python programs.
**3.3.4 WP4- Smart House interface & controller**
N/A
# 3.4 Preserving Data
Each WP preserves the data it works with as its own entity; in addition, data
sets will be shared and preserved at the common project level.
## WP1- Neuronal recordings
From the beginning of the project, all raw data collected in the
neurophysiological experiments of WP1 will be stored on a central
institutional server. Data on this server is subject to automated backups (a)
for mirroring, i.e. keeping a mirror-image of the data on hardware which is
physically separate from the original data-storage, and (b) for incremental
backup, i.e. providing long-term storage of the data without overwriting or
deleting older versions of the data.
## WP4- Smart House interface & controller
Data of WP4 related to the service interaction are preserved on the servers of
LST. Backups and applications developed during the project will also be
available in the GitLab repository of LST (
_https://gitlab.lst.tfo.upm.es/users/sign_in_ ).
# 3.5 Giving access to Data
Internal access will be provided for each partner to ensure accessibility,
security and confidentiality of the data.
## WP1- Neuronal recordings
From the beginning of the project, neurophysiological and metadata collected
in WP1 will be available for the consortium partners upon request. Data will
be made accessible via commercial file sharing software (OwnCloud)
administered by the institute’s IT department.
## WP4- Smart House interface & controller
Data access in WP4 is provided through REST services in JSON format. These
services allow clients from WP1 and WP3 to interact with the devices of the
Smart House Living Lab. Accessing the services requires confidentiality and
authentication. Confidentiality is guaranteed by using an HTTPS connection
that hides the data sent in requests to the services, while authentication is
guaranteed through secure signatures based on JSON Web Tokens [ _RFC 7519_ ]
with an expiration time.
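As an illustration of this mechanism, the following minimal sketch uses the PyJWT library; the secret, claims and lifetime are placeholders, not the values used by the Smart House Living Lab services:

```python
import datetime
import jwt  # PyJWT library

SHARED_SECRET = 'placeholder-shared-secret'

# Client side: issue a token signed with the shared secret and carrying an
# expiration time ('exp' claim, RFC 7519).
token = jwt.encode(
    {'sub': 'wp3-controller',
     'exp': datetime.datetime.utcnow() + datetime.timedelta(minutes=15)},
    SHARED_SECRET,
    algorithm='HS256',
)

# Service side: decoding verifies both the signature and the expiration,
# so only authenticated, non-expired requests are served.
claims = jwt.decode(token, SHARED_SECRET, algorithms=['HS256'])
```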
# 3.6 Re-Using Data
## WP1- Neural recordings
Collection of empirical data in cognitive systems neuroscience with non-human
primates is very demanding and time-consuming, resulting in project durations
of typically a few years. At the same time, the complexity of the data is
substantial.
The high value of the data and their complexity lead to a situation in which a
successfully collected data set is typically used for many years after the end
of the immediate data collection period, with publications sometimes appearing
10 years later.
To make this possible, the data are stored and documented in a way that allows
analysis of the data over many years by different scientists in the lab and by
collaboration partners.
## WP4- Smart House interface & controller
Services and their related data, developed in WP4, will be available and
reused for further projects and applications also after the lifetime of
Plan4Act.
# 3.7 Plan4Act Open Access
Open access can be defined as **the practice of providing on-line access to
scientific information that is free of charge to the reader** .
In the context of Research and Development, Open Access typically focuses on
access to “scientific information”, which refers to two main categories:
Peer-reviewed scientific research articles (published in academic journals);
Scientific research data (data underlying publications and/or raw data).
The European Commission sees Open Access not as an end in itself but as a tool
to facilitate and improve the circulation of information and transfer of
knowledge in the European Research Area (ERA) and beyond.
### 3.7.1 Open Research Data Pilot in Horizon 2020
**Open data** is data that is free to use, reuse, and redistribute. The Open
Research Data Pilot aims to make the research data generated by Horizon 2020
projects open.
Requirements are:
* Develop (and keep up-to-date) a Data Management Plan (DMP).
* Deposit your data in a research data repository.
* Make sure third parties can freely access, mine, exploit, reproduce and disseminate it.
* Make clear what tools will be needed to use the raw data to validate research results (or provide the tools themselves).
This deliverable, the “Data Management Plan”, is itself part of the
requirements. The other requirements will be met by providing free data access
and the required tools.
Plan4Act aims to deposit the research data needed to validate the results
presented in the deposited scientific publications, ideally via a data
repository.
We are considering the OpenAIRE/CERN solution created for H2020 outputs, the
ZENODO repository; an illustrative deposit sketch is given below. A Plan4Act
repository will be created on Zenodo under the Science community.
_https://www.openaire.eu_ _https://zenodo.org/_ .
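For illustration, a deposition can be created programmatically through Zenodo's REST API; the sketch below assumes a personal access token and a placeholder file name, and is not part of the project tooling:

```python
import requests

ZENODO_TOKEN = 'placeholder-access-token'

# Create an empty deposition, then attach a data file to it.
r = requests.post('https://zenodo.org/api/deposit/depositions',
                  params={'access_token': ZENODO_TOKEN}, json={})
r.raise_for_status()
deposition = r.json()

with open('spike_times_ms.npy', 'rb') as fp:
    requests.post(deposition['links']['files'],
                  params={'access_token': ZENODO_TOKEN},
                  data={'name': 'spike_times_ms.npy'},
                  files={'file': fp}).raise_for_status()
```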
# 3.8 Conclusions
This deliverable shows the data creation flow throughout the project. At this
stage, the Data Management Plan for Plan4Act is a living document that will be
updated in accordance with the stages of the project.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1230_INJECT_732278.md
# Executive Summary
INJECT is an Innovation Action that supports technology transfer to the
creative industries, under the call for “action primarily consisting of
activities directly aiming at producing plans and arrangements or designs for
new, altered or improved products, processes or services” (H2020 Innovation
Action). To achieve its project aims, INJECT has been tested and plans have
been created to establish an INJECT spin-off business in the journalism market
through its ecosystem developments, while user testing and testing of the tool
in operational environments have aided the development and technical
improvement of the INJECT technology.
The INJECT tool is new to journalism and to European markets; this second data
management plan covers the period to the end of the project in June 2018,
including the testing and validation of both technical and economic
performance in real-life operating conditions provided by the journalism
market domain. This project therefore has limited scientific research
activities.
This document aims to present the updated data management plan for INJECT
project, the considerations, actions and activities that have been undertaken
with an aim to deliver on the objectives of the project. The data management
plan deliverable was first introduced in version one as a living document.
This document updates the discussion on the INJECT data types and applies the
FAIR data management process to ensure that, wherever possible, the research
data is findable, accessible, interoperable and reusable (FAIR), and to ensure
it is soundly managed until project end.
# Purpose of the Data Management Plan
This deliverable of the INJECT project is prepared under WP5 and Task 5.2,
_INJECT Data Management Plan (2nd version)_. In this task we update the
discussion of the data management life cycle for the data processed and/or
generated by the INJECT project, and of how the data are made findable,
accessible, interoperable and reusable (FAIR), as reported in _INJECT Data
Management Plan (1st version)_.
# INJECT Data Types
This updated version of the Data Management Plan addresses the questions as
noted in the previous version 1, updating where actions were underway and
further considerations that were made as the project developed. As previously
noted, the INJECT project is an H2020 Innovation Action, and hence is not
intended to generate scientific data per se, therefore the data management
plan considers the activities undertaken within the project.
**2.1 What is the purpose of the data collection/generation and its relation
to the objectives of the project?**
The three stated INJECT project objectives are:
Obj1: Extend and aggregate the new digital services and tools to increase the
productivity and creativity of journalists in different news environments
Obj2: Integrate and evaluate the new digital services and environments in CMS
environments
Obj3: Diffuse the new digital services and support offerings in news and
journalism markets
Data collection and generation related to each objective is intended to enable
the co-creation and then effective evaluation of the INJECT tool, and the
scientific reporting of the research and innovation that will deliver each of
these objectives.
**2.1.1 What types and formats of data has the project generated/collected?**
The project has generated and collected the following types and formats of
data:
* Co-created user requirements on the INJECT tool and services: format is structured text requirements;
* Parsed and semantic-tagged news stories from online digital news sources (including partner news archives) as part of the INJECT toolset: the raw article data is stored in a PostgreSQL database, and the processed/parsed results are stored in an external Elastic Search Cluster for later searching (see the storage sketch after this list);
* Semantic-tagged news stories used to inform design of INJECT creative search strategies: format is structured documents of news stories, associated word counts and other observed patterns, by story type;
* Statistical data about the impact of news stories on social media.
* Usability evaluation reports of INJECT tool by journalists: format is structured written reports;
* Semi-structured interview data about INJECT tool use by journalists: format is documented, content-tagged notes from semi-structured interviews;
* Focus group reports about INJECT tool use by journalists: format is structured reports of focus group findings;
* INJECT tool activity log data, recording meaningful activities of tool users over selected time periods: format is structured spreadsheet;
* Corpus of news stories generated by journalists using the INJECT tool: format is structured database of news stories and related data attributes;
* Quantitative creativity assessments of news stories generated by journalists with and without use of the INJECT tool: format is structured spreadsheet;
* Economic and contract data about each launched INJECT ecosystem: format is structured spreadsheet;
* Complying with duties and obligations under the General Data Protection Regulation (GDPR); no identifiable personal data will be published and the identifiable data will not be shared with any other organisation. Participants in activities have had consent explained and their rights acknowledged to withdraw any time and have their data removed. All partners will store data in compliance with GDPR.
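As a minimal sketch of the storage step for the raw articles mentioned above (connection settings, table and column names are placeholders; the real schema is project-internal), using Python and psycopg2:

```python
import psycopg2

# Placeholder connection string; the real DSN is project-internal.
conn = psycopg2.connect('dbname=inject user=inject')
cur = conn.cursor()

# Store the raw article; parsed/semantic-tagged results are indexed into the
# external Elastic Search Cluster in a separate step.
cur.execute(
    'INSERT INTO articles (url, headline, body) VALUES (%s, %s, %s)',
    ('https://example.org/story', 'Example headline', 'Full article text...'),
)
conn.commit()
cur.close()
conn.close()
```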
**2.1.2 Will you re-use any existing data and how?**
The following data is reused from existing news sources:
* Parsed and semantic-tagged news stories from online digital news sources (including partner news archives) as part of the INJECT toolset: the raw news article data is stored in a PostgreSQL database, and the processed/parsed results are stored in an external Elastic Search Cluster for later searching;
* Semantic-tagged news stories used to inform design of INJECT creative search strategies: format is structured documents of news stories, associated word counts and other observed patterns, by story type;
* Corpus of news stories generated by journalists using the INJECT tool: format is structured database of news stories and related data attributes.
**2.1.3 What is the origin of the data?**
The reused data originates from selected news sources. These news sources can
be identified by country, as shown in Figure 1 (unique news sources by
country), and by language, as shown in Figure 2 (unique news sources by
language).
_Figure 1: Unique news sources by country_
[Bar chart: number of unique news sources per country, covering Australia, Bhutan, Czech Republic, EU, Greece, India, Italy, Netherlands, NZ, Qatar, Spain and Thailand.]
_Figure 2: Unique news sources by language_
[Bar chart: number of unique news sources per language, covering ENG, FRA, GER, GR, NL, NOR, SPA and IT.]
_Figure 3: Unique news sources listed_
**2.1.4 Norwegian news sources**
The Norwegian ecosystem utilises specific news sources from archives belonging
to Hallingdolen,
Hordaland Avis, and Sunnhordaland. The number of articles from these archives
is approximately 60,000 for the period covered, 1st January 2015 to 17th
February 2018. As the first ecosystem for INJECT is further established in
Norway, more sources may be added, such as internal archives, statistical
bureau information, and public data (maps, weather, traffic). It is further
noted that this list will expand with further ecosystem developments as more
newspapers and other players from the journalistic domain become customers in
the future.
**2.1.5 Data generated during the project arises from:**
* A user-centred co-design process with journalists and news organisations;
* Knowledge acquisition and validation exercises with experienced journalists for each of the 6 INJECT creative search strategies;
* Data- and information-led design of each of the 6 INJECT creative search strategies;
* Formative and summative evaluations of INJECT tool use by journalists and news organisations.
* Original content created by journalists and news organisations who choose to contribute to public Explaain card content.
**2.1.6 What is the expected size of the data?**
The expected sizes of the data varies by types:
* Documents and reporting describing the user requirements, user activity logs and qualitative results from formative and summative evaluations of the INJECT tool, including the corpus of generated news stories, will be small – deliverable reports with short data appendices;
* Parsed and semantic-tagged news stories from online digital news sources (including partner news archives) as part of INJECT toolset will be large. The current data set at m18 of the project is over six million articles.
**2.1.7 To whom might it be useful ('data utility')?**
The INJECT project data might be useful to:
* News organisations and IT providers who will target the news industry, to inform their development of more creative and productive news stories, to support the competitiveness of the sector;
* News organisations and IT providers who wish to develop new forms of business model through which to deliver digital technologies to the news and journalism sectors;
* Journalism practitioners who will extrapolate from project results in order to improve journalism practices across Europe.
* Academics and University departments and Institutes that could use the INJECT data for research and teaching purposes.
# FAIR data
## Making data findable, including provisions for metadata
As stated previously, INJECT is an Innovation Action that supports technology
transfer to the creative industries; it has tested and planned for an INJECT
spin-off business in the journalism market through its ecosystem developments.
The INJECT tool is new to journalism and to European markets, and the
intention is that it becomes a sought-after, commercially viable product. This
viability will require the product to be sold and to earn revenue, from both
its subscribed use and innovations made through paid-for adaptations. It will
be necessary that some types of information are sold specifically to customers
and therefore cannot be in the public domain.
The FAIR framework asks:
* Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism (e.g. persistent and unique identifiers such as Digital Object Identifiers)?
* What naming conventions do you follow?
* Will search keywords be provided that optimize possibilities for re-use?
* Do you provide clear version numbers?
* What metadata will be created? In case metadata standards do not exist in your discipline, please outline what type of metadata will be created and how.
The following table provides INJECT’s current answers to these questions.
_Table 1: Making data findable._
<table>
<tr>
<th>
**Data type**
</th>
<th>
**Discoverable?**
</th>
<th>
**Reuse and metadata conventions**
</th> </tr>
<tr>
<td>
Co-created user requirements on the INJECT tool and services
</td>
<td>
No
</td>
<td>
The single user requirements document will be extracted from project
deliverables, and posted in an acceptable form on the INJECT project website.
</td> </tr>
<tr>
<td>
Parsed and semantic-tagged news stories from online digital news sources as
part of INJECT toolset
</td>
<td>
Yes
</td>
<td>
All news stories will be searchable through the INJECT tool and advanced
search algorithms, which have APIs. News stories are tagged with semantic
metadata about article nouns and verbs, and person, place, organisation and
activity entities. The meta-data types are currently bespoke standards, to
allow tool development to take place.
</td> </tr>
<tr>
<td>
Semantic-tagged news stories used to inform design of INJECT creative search
strategies
</td>
<td>
No
</td>
<td>
The news stories will be collated in one or more online documents. Each news
article will be meta-tagged with data about the article’s length, presence and
number of keywords, and other observations.
</td> </tr>
<tr>
<td>
Statistical data about the impact of news stories on social media
</td>
<td>
Yes
</td>
<td>
All generated statistical data will be retrievable through the INJECT tool.
</td> </tr>
<tr>
<td>
Usability evaluation reports of INJECT tool by journalists
</td>
<td>
No
</td>
<td>
The usability evaluation report content will not be made available for reuse.
Ethical approval does not allow for reuse and sharing.
</td> </tr>
<tr>
<td>
Semi-structured interview data about INJECT tool use by journalists
</td>
<td>
No
</td>
<td>
The semi-structured interview data will not be made available for reuse, as
ethical approval does not allow for its reuse and sharing.
</td> </tr>
<tr>
<td>
Focus group reports about INJECT tool use by journalists
</td>
<td>
No
</td>
<td>
The focus group data will not be made available for reuse, as ethical approval
does not allow for its reuse and sharing.
</td> </tr>
<tr>
<td>
INJECT tool activity log data, recording meaningful activities of tool users
over selected time periods
</td>
<td>
Yes
</td>
<td>
Anonymous INJECT tool activity log data will be made available for sharing and
reuse, in line with ethical consent from journalist users. Clear log data
versions will be set up. Data will be structured and delivered in XLS sheets,
to allow analyst searching and management of the data.
</td> </tr>
<tr>
<td>
Corpus of news stories generated by journalists using the INJECT tool
</td>
<td>
No
</td>
<td>
The corpus of news stories will not be made available directly for reuse by
the project, although published articles will be available, at their
publication source.
</td> </tr>
<tr>
<td>
Quantitative creativity assessments of selected news stories generated by
journalists with and without use of the INJECT tool
</td>
<td>
Yes
</td>
<td>
Anonymous quantitative creativity assessments of selected news stories
generated with and without the INJECT tool will be made available for sharing
and reuse, in line with ethical consent from the expert assessors. Clear log
data versions will be set up. Data will be structured and delivered in XLS
sheets, to allow analyst searching and management of the data.
</td> </tr>
<tr>
<td>
Economic and contract data about each launched INJECT ecosystem
</td>
<td>
No
</td>
<td>
The intention is that INJECT becomes a sought-after, commercially viable
product to be sold and to earn revenue, from both its subscribed use and
innovations made through paid-for adaptations. It will be necessary that some
types of information are sold specifically to customers and therefore cannot
be in the public domain.
</td> </tr> </table>
## Making data openly accessible
The FAIR framework asks:
* Which data produced and/or used in the project will be made openly available as the default? If certain datasets cannot be shared (or need to be shared under restrictions), explain why, clearly separating legal and contractual reasons from voluntary restrictions.
* How will the data be made accessible (e.g. by deposition in a repository)?
* What methods or software tools are needed to access the data?
* Is documentation about the software needed to access the data included?
* Is it possible to include the relevant software (e.g. in open source code)?
* Where will the data and associated metadata, documentation and code be deposited?
Preference should be given to certified repositories that support open access
where possible.
* Have you explored appropriate arrangements with the identified repository?
* If there are restrictions on use, how will access be provided?
* Is there a need for a data access committee?
* Are there well-described conditions for access (i.e. a machine readable license)?
* How will the identity of the person accessing the data be ascertained?
The following table provides INJECT’s current answers to these questions for
data that will be made available for sharing in the project subject to GDPR
compliance.
_Table 2: Openly accessible data._
<table>
<tr>
<th>
**Data type**
</th>
<th>
**Open?**
</th>
<th>
**How will data be accessed**
</th> </tr>
<tr>
<td>
Co-created user requirements on the INJECT tool and services
</td>
<td>
Yes
</td>
<td>
The single user requirements document will be posted on the project website,
with clear signposting and instructions for use.
</td> </tr>
<tr>
<td>
Parsed and semantictagged news stories from online digital news sources as
part of INJECT toolset
</td>
<td>
No
</td>
<td>
The parsed and semantic-tagged news stories will not be made publicly
available. This data represents the core commercial value of the INJECT tool,
and will not be shared, except through INJECT tools made available as part of
the commercial ecosystems.
</td> </tr>
<tr>
<td>
Semantic-tagged news stories used to inform design of INJECT creative search
strategies
</td>
<td>
Yes
</td>
<td>
The news stories will be published in online documents that will be accessible
via INJECT’s restricted project website and associated storage space. The
stories will be stored and edited using standard MS Office applications, which
users will need in order to edit them. A validated user log-in to the
restricted area of the INJECT project website will be needed to access and
download the stories.
</td> </tr>
<tr>
<td>
Statistical data about the impact of news stories on social media
</td>
<td>
No
</td>
<td>
The generated statistical data will not be made publicly available. This data
represents the core commercial value of the INJECT tool, and will not be
shared, except through INJECT tools made available as part of the commercial
ecosystems.
</td> </tr>
<tr>
<td>
INJECT tool activity log data, recording meaningful activities of tool users
over selected time periods
</td>
<td>
Yes
</td>
<td>
The INJECT tool activity log data will be published in online documents that
will be accessible via INJECT’s restricted project website and associated
storage space. The log data will be stored and edited using standard MS Office
applications, which users will need in order to edit them. A validated user
log-in to the restricted area of the INJECT project website will be needed to
access and download the log data.
</td> </tr>
<tr>
<td>
Quantitative creativity assessments of selected news stories generated by
journalists with and without use of the INJECT tool
</td>
<td>
Yes
</td>
<td>
The collected quantitative assessments will be published in online documents
that will also be accessible via INJECT’s restricted project website and
associated storage space. The assessments will be stored and edited using
standard MS Office applications, which users will need in order to edit them.
A validated user log-in to the restricted area of the INJECT project website
will be needed to access and download the quantitative assessments.
</td> </tr>
<tr>
<td>
Economic and contract data about each launched
INJECT ecosystem
</td>
<td>
No
</td>
<td>
The intention is that INJECT becomes a sought-after, commercially viable
product with innovations made through paid-for adaptations. It will be
necessary that some types of information are sold/contracted to specific
customers and therefore cannot be in the public domain.
</td> </tr> </table>
## Making data interoperable
The FAIR assessment asks:
* Are the data produced in the project interoperable, that is allowing data exchange and re-use between researchers, institutions, organisations, countries, etc. (i.e. adhering to standards for formats, as much as possible compliant with available (open) software applications, and in particular facilitating re-combinations with different datasets from different origins)?
* What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable?
* Will you be using standard vocabularies for all data types present in your data set, to allow interdisciplinary interoperability?
* In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies?
In response, the INJECT project will not seek to make its data interoperable
with other research data sets, or to enable data exchange and re-use between
researchers, institutions, organisations and countries. There are several
reasons for this decision:
* There are no established standards for data about digital tool use in journalism, to interoperate with;
* There are no established standards for data about creativity support tool use in computer science to interoperate with either, although a standardized survey metric for digital creativity support has been developed by US researchers, which the INJECT project will follow.
To compensate, the INJECT project will make its data available in the most
open tools available, for example the MS Office suite, and to provide
sufficient documentation to enable understanding and use by other researchers
subject to GDPR compliance.
## Increase data re-use (through clarifying licences)
The FAIR framework asks:
* How will the data be licensed to permit the widest re-use possible?
* When will the data be made available for re-use? If an embargo is sought to give time to publish or seek patents, specify why and how long this will apply, bearing in mind that research data should be made available as soon as possible.
* Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why.
* How long is it intended that the data remains re-usable?
* Are data quality assurance processes described?
Data re-use is a live consideration for INJECT as the tool is technically
developed and ecosystems established. City and the Innovation Manager are
leading an exploration into the registration of one or more trademarks for the
project. As the currently recommended action, public documents, such as the
website, have been marked with the copyright symbol (©), name and year of
creation: Copyright © The INJECT Consortium, 2017. Data protection aspects of
the project will be coordinated across the relevant national data protection
authorities.
The new regulations and data protection rules that came into force in May 2018
have required each project partner and the coordinator to consider all matters
relating to data storage, processing, etc. The project coordinator has a Data
Protection Officer in place to assist in these matters.
In addition, an ongoing investigation into Intellectual Property rights will
continue with the development of ecosystems beyond the project.
## Allocation of resources
The FAIR framework asks:
* What are the costs for making data FAIR in your project?
* How will these be covered? Note that costs related to open access to research data are eligible as part of the Horizon 2020 grant (if compliant with the Grant Agreement conditions).
* Who will be responsible for data management in your project?
* Are the resources for long term preservation discussed (costs and potential value, who decides and how what data will be kept and for how long)?
The FAIR framework has a minimal impact on INJECT. INJECT’s resources for
managing the FAIR framework are built into the project’s work plan. For
example:
* The development and management of the INJECT data types and sets is incorporated into and budgeted for in the current work plan;
* Overall data management will be undertaken by the project manager role at the project coordinator partner City, University of London.
The resources for long-term preservation of the INJECT data sets have been
reviewed. All primary research data about the design and evaluation of INJECT
will be stored and made available for 10 years. Most such data will be stored
in digital form on computer servers of INJECT’s university partners – City
University of London, University of Bergen, ICCS and the University of
Groningen. Digital data will be stored as computer files to be available by
password only, and stored on encrypted devices such as laptops and server hard
drives. Non-digital data, such as questionnaires and consent forms, will be
stored locally in locked filing cabinets. At the end of the period, all
digital data will be deleted. Physical documents will be shredded on the
premises of the universities using a cross cut shredder which conforms to
standard DIN level 5 (maximum size of paper is 0.8mm x 12mm) and then disposed
of via confidential waste management contracts. The standard is consistent
with the UK’s Health & Social Care Information Centre (HSCIC).
## Data security
The FAIR framework asks:
* What provisions are in place for data security (including data recovery as well as secure storage and transfer of sensitive data)?
* Is the data safely stored in certified repositories for long-term preservation and curation?
INJECT stores the processed/parsed results in an Amazon Elasticsearch cluster.
Amazon Elasticsearch Service routinely applies security patches and
keeps the Elasticsearch environment secure and up to date. INJECT controls
access to the Elasticsearch APIs using AWS Identity and Access Management
(IAM) policies, which ensure that INJECT components access the Amazon
Elasticsearch clusters securely. Moreover, the AWS API call history produced
by AWS CloudTrail enables security analysis, resource change tracking, and
compliance auditing.
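A sketch of how a client component might authenticate to such a cluster under IAM policies, using the elasticsearch-py client with AWS SigV4 request signing (endpoint, region, credentials and document fields are placeholders):

```python
from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth

# Placeholder IAM credentials and domain endpoint.
awsauth = AWS4Auth('ACCESS_KEY', 'SECRET_KEY', 'eu-west-1', 'es')

es = Elasticsearch(
    hosts=[{'host': 'search-example.eu-west-1.es.amazonaws.com', 'port': 443}],
    http_auth=awsauth,        # every request is signed, so IAM can check it
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)

# Index a parsed article; unsigned or unauthorised requests are rejected.
es.index(index='articles', id='example-1',
         body={'headline': 'Example', 'entities': ['person', 'place']})
```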
## Ethical aspects
The FAIR framework asks:
* Are there any ethical or legal issues that can have an impact on data sharing? These can also be discussed in the context of the ethics review.
* Is informed consent for data sharing and long-term preservation included in questionnaires dealing with personal data?
The INJECT consortium have not identified any specific ethics issues related
to the work plan, outcomes or dissemination. We do note that individual
partners will adhere to ethical rules. At City, University of London the data
management and compliance team continue to update all policies and procedures
on ethics and data use. We continue to work to the current data protection
policy with a commitment to protecting and processing data with adherence to
legislation and other policy. “Sensitive data shall only be collected for
certain specific purposes, and shall be obtained with opt-in consent” will
apply to all personal data collected, and all participants will be provided
with fair processing notices about the use of that data. The project will
adhere to the
commitment to holding any data in secure conditions, and will make every
effort to safeguard against accidental loss or corruption of data.
# Summary and Outlook
This subsequent INJECT deliverable D5.2 revisits and updates the data
management plan D5.1. It documents the considerations, actions and activities
undertaken alongside the delivery of the project objectives, guided by the
observation that “The FAIR Data Principles provide a set of milestones for
data producers” (Wilkinson et al, 2016) and applying FAIR management of
research data that is findable, accessible, interoperable and reusable.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
1233_GATES_732358.md
## Introduction
This deliverable (D1.4) is the third and final version of the Data Management
Plan (DMP) for the GATES project.
GATES is a HORIZON 2020 beneficiary project under the “H2020-ICT-2016-2017”
call; therefore, the updated DMP is submitted through this deliverable,
followed by the final review of the DMP at month 30.
### What is a DMP
A Data Management Plan (or DMP for short) is an official document that defines
the research data life cycle within the project. The data life cycle is a
collection of the steps that data must go through to ensure proper management
and reusability of the collected and/or generated data, though it may differ
slightly between projects based on the nature of the data. The foundation of
every data life cycle is depicted in Figure 1 and includes the following
steps: 1) creating data, 2) processing data, 3) analyzing data, 4) preserving
data, 5) giving access to data and 6) re-using data.
**The purpose of the Data Management Plan** is to assess and define up front
all aspects of data management, metadata creation, analysis and storage, and
to ensure proper and sound management of the research data that will be
collected, processed and generated within the GATES project.
### Objectives of the initial DMP
The current deliverable has on purpose to ensure proper and sound management
of the research data that will be collected, processed and generated within
GATES. The concrete objectives of the document are to (a) detail the handling
of research data during and after the project, (b) describe the methodology
and standards required, (c) identify whether and how data will be shared,
exploited or made accessible for verification, and re-used, and (d) identify
how they will be curated and preserved.
### DMP is constantly evolving
DMP is not an “isolated” deliverable. It begins at day one of the project and
aims at defining all of the data management details, but it constantly evolves
and gains precision and substance throughout the lifespan of the project.
## Methodology
The European commission has issued a document called “ _Guidelines on FAIR
Data Management in Horizon_
_2020_ ” 1 . This document serves as a guideline to assist the HORIZON 2020
beneficiaries on the creation of the DMP and introduces the FAIR principle.
According to the aforementioned document, “ _Horizon 2020 beneficiaries should
make their research data**F** indable, **A** ccessible, **I** nteroperable and
**R** eusable (FAIR), to ensure it is soundly managed _ ”. Additionally, the
document provides its reader with a DMP template in ANNEX 1, outlining all the
sections that a HORIZON DMP deliverable must contain along with aid in the
form of questions to be answered for each section.
Furthermore, EU has published an “Open access & Data management” 2 guideline
to “explain the rules on open access to scientific peer reviewed publications
and research data that beneficiaries have to follow in projects funded or co-
funded under Horizon 2020”. OpenAire 3 , has issued a similar document
called “Briefing Paper Research Data Management” 4 , in which it
demonstrates the need for a DMP and how to draft one. Furthermore, OpenAire,
in cooperation with EUDAT 5 , has conducted webinars on the subject and has
publicly posted them online 6 .
GATES consortium, after careful consideration of the previously mentioned
material, drafted this DMP document, using the template provided by the
European Commission and the knowledge extracted from the other sources. Each
of the following sections represent a specific dataset of the GATES project.
## Dataset No 1: Project Management derived data
### Data Summary
The purpose of this dataset (data collection) is to document all of the data
produced during the Project Management Activities from the consortium of the
GATES project. During the lifespan of the project, many data types were
produced and to maintain data availability and safety the consortium created a
folder in a dedicated server, where all members have access.
**Presentations:** Several PowerPoint presentations have been created (project
meeting presentations etc) during the projects lifespan. The purpose of these
presentations vary depending on whether it’s a dissemination material
responsible of promoting GATES or if it’s part of a deliverable. These files
were created by the project member responsible of each deliverable and sent to
the coordinator in the form of .pptx files, using Microsoft PowerPoint
software. The overall size of the presentations is approximately _964Mb_ .
Besides the presentations of the partners, several data were stored in the
projects private server. The following table presents the folder structure of
the projects private area along with a minor description of what is contained
in each one:
<table>
<tr>
<th>
**Folder name**
</th>
<th>
**Data description**
</th> </tr>
<tr>
<td>
**Deliverables**
</td>
<td>
This folder contains all of the deliverables of the GATES project as they were
submitted in the EU portal. Each deliverable is placed on a folder named after
the Work Package it corresponds to.
</td> </tr>
<tr>
<td>
**Dissemination**
</td>
<td>
In this folder, all of the material that was used to disseminate the project
is saved in folders with the name of the event it was used in.
</td> </tr>
<tr>
<td>
**DMP**
</td>
<td>
Folder that contains the data for each dataset as described in the Data
Management Plan document.
</td> </tr>
<tr>
<td>
**GA**
</td>
<td>
The Grant Agreement in pdf format for reference
</td> </tr>
<tr>
<td>
**Meetings**
</td>
<td>
The presentations for each meeting is stored in this folder.
</td> </tr>
<tr>
<td>
**WP6 Communication**
</td>
<td>
Material that was created under Work Package 6 and will be used for the
dissemination activities of the project.
</td> </tr>
<tr>
<td>
**Other docs**
</td>
<td>
Helper folder to store generic data relevant to the project.
</td> </tr> </table>
**Table 1: GATES private folder structure**
**Deliverables:** The Project Management dataset produced four (4)
deliverables:
1. **D1.1 Project Management Handbook:** to present the main aspects related to GATES management summarizing the organizational structure, operating procedures and management tools of the project.
2. **D1.2 Data Management Plan & Support Pack: ** specify how data will be collected, processed, monitored, catalogued, and disseminated during the project’s lifetime.
3. **D1.3 Project Interim Report:** to document the progress of the project mid-term.
4. **D1.4 Project Final Report:** a summary of GATES project activities and work.
The responsible partner for each deliverable documented it and delivered it to
the consortium in the form of a Word document (.docx) for review before
submission to the EU portal, whereas the final version will be saved as a PDF
document (.pdf) and stored in the project’s private server repository. The
expected size of each deliverable was around 4-5 MB.
No existing data were used or reused for any of the previously mentioned
items; on the contrary, new data were created during the lifespan of the GATES
project. Furthermore, these data were useful mostly to the project partners as
a reference tool.
### FAIR Data
#### Making data findable, including provisions for metadata
The use of metadata for the project management dataset was not necessary,
mainly because most of this dataset is confidential and will not be made
findable. Additionally, no versioning is required, since the data will be in
pdf format and will not change in the future.
The deliverables which are openly available are publicly hosted on the web
portal, so any interested party will be able to access them via the
navigational menu and the search bar by querying the name of the deliverable
(e.g. “data management plan”). The naming convention of the data was decided
to be:
* GATES - Data Management Plan deliverable.pdf
* GATES - Interim Report deliverable.pdf
* GATES - Final Report deliverable.pdf
#### Making data openly accessible
The data in this collection, except for the three aforementioned deliverables,
will not be openly available, as they are confidential as part of the internal
processes of the project and accessible only to the members of the consortium.
They will be stored in a folder on the project’s private server, which is
shared only with the partners of the project.
All of the deliverables (public or confidential) are available in a pdf file
format, which can be opened with the “Adobe Acrobat Reader” software (free).
The public deliverables are openly accessible without any restrictions.
All of the public deliverables have been uploaded to the project’s portal and
can be accessed via _http://www.gates-game.eu/en/project/results_
#### Making data interoperable
Not applicable for this particular dataset.
Increase data re-use (through clarifying licenses):
As identified in the previous sections, most of this dataset will not be
openly accessible, so, by extension, there can be no data re-use for it. For
the deliverables that are openly accessible (D1.2, D1.3, D1.4), the “ODC
Public Domain Dedication and Licence” will be used to allow interested parties
to use them freely. The quality of the data will be assured by the consortium,
and the data will be available from the moment they are uploaded to the web
portal ( _www.gates-game.eu_ ) for as long as the website is up and running.
### Allocation of resources
All of the costs regarding the project management activities have been
estimated and included in the budget of the project in the form of person
months for Work Package 1.
### Data Security
All of the documents derived from this dataset are stored on the GATES private
server and shared with the members of the consortium. The pdf files of the
three public deliverables are also hosted on the server of the web portal,
following the security protocols of the hosting service. The private server
performs periodic backups to a secure external location to ensure data
recovery.
**Ethical Aspects**
Not applicable.
**Other**
Not applicable.
## Dataset No 2: User requirements data
### Data summary
The purpose of this dataset is to identify the strengths and weaknesses of the
customer segments and end-users, together with their requirements and
expectations. The requirements will be used for establishing the template to
gather data flow information, the definition of the process model and the
definition of the learning strategies based on customers’ and end-users’
needs. To that end, the partner from Serbia (InoSens) designed a questionnaire
which was answered by farmers (InoSens, Serbia), agricultural students (AUA,
Greece), smart farming technologies specialists (ANSEMAT, Spain) and company
representatives (ANSEMAT, Spain). The completion of these questionnaires
provided the consortium with data regarding: demographic information, general
information, behavioral requirements (functional) and development quality
attributes (non-functional).
This dataset consists of a Microsoft Word document (.docx), the questionnaire
that was used (approximately 33Kb), and a Microsoft Excel document (.xlsx),
the result of processing the survey input. Furthermore, a user requirements
deliverable (D2.1) is created from this dataset, presenting the information
that was extracted.
Finally, this dataset will be extremely useful, first of all to the consortium
itself for designing GATES as effectively as possible, and then to researchers
that require data on end-user needs in an educational game. No existing data
will be used or re-used.
### FAIR Data
#### Making data findable, including provisions for metadata
The complete dataset is hosted on the private server of the system and shared
between the members of the consortium. The results of the analysis (in the
form of a document) will be hosted on the server of the web portal to be made
publicly available for users to download upon EU approval. No metadata was
created: since this dataset is not to be altered in the future, no versioning
is required. Users will be able to locate the desired document using the
navigational menu of the web portal or by querying with the keyword “user
requirements” in the search bar.
Finally, the following naming convention is used for this dataset:
* GATES_WP2.1_User_requirements_deliverable.pdf
* GATES_WP2.1_Questionnaire_results.xlsx
#### Making data openly accessible
The deliverable, along with the template of the questionnaire, is openly
available via the GATES web portal without any restrictions on access. The
questionnaire results file containing the responses of the participants will
be kept private as it contains confidential information, but the insights
extracted from the analysis of those answers are part of the deliverable and
publicly available.
The deliverable is a pdf file which can be opened with Adobe Acrobat Reader
(free).
The user requirements deliverable has been uploaded to the SyGMa platform and
labeled as **approved** , so it is publicly available in the project's portal.
The questionnaire of the user requirements analysis survey is incorporated
into the deliverable under the Annex 1 section, whereas the questionnaire
results Excel file is stored in the project's private folder and flagged as
private.
#### Making data interoperable
The deliverable file will be provided “as is” in pdf format and can only be
used as a reference, whereas the template of the questionnaire will be in Word
document format, which is the most commonly used text format and also allows
the user to export it in multiple formats and further process it.
#### Increase data re-use (through clarifying licenses)
The insights extracted from the analysis will be openly available on the
website of the system, with no restriction, for anyone to use according to
their needs under the PDDL licence. The data will be made available for re-use
from the moment they are uploaded in the web portal and will remain reusable
for as long as the website is up and running. The quality of the data is
assured by the GATES team through validation tests on the inputs.
**ALLOCATION OF RESOURCES**
No additional cost is required for this dataset as it is included in the
person months cost of WP2.
### DATA SECURITY
Since the deliverable document is openly available in the web portal, no extra
security measures are needed. The data will also be stored on the project's
private server for safekeeping, and periodic backups to an external server
will ensure data recovery in case of server failure.
### ETHICAL ASPECTS
The data in the questionnaire results Excel file fall within the ethical
aspects area. The original questionnaire results contain personal information
of the participants and cannot be made public. To maintain the anonymity and
privacy of the respondents, the questionnaire results Excel file will be kept
private on the server and restricted even among the partners. The coordinator
(AUA) is responsible for maintaining anonymity and for distributing the file
to the interested partners with caution.
**OTHER**
Not applicable
## Dataset No 3: Material Collection Data
### Data Summary
The purpose of this dataset is the development of the content that will be
used for teaching and improving learners' SFT skills. This internal material
database will be used to support the management and monitoring of all the
materials collected during the project implementation phase.
The sources of the collected material were the _World Wide Web_ and digital
and printed material which was digitized. During the lifespan of the project,
172 files have been created from numerous sources, containing vital
information that was later used to feed the Library module of GATES. The size
of this dataset is approximately _390Mb_ and it is currently hosted on the
project's private server, accessible only by the members of the consortium.
This dataset will serve a vital role in the development of GATES, providing
the educational content for the developers to incorporate into the game.
### FAIR DATA
#### Making data findable, including provisions for metadata
The material collection dataset is placed on the project's private server and
shared with the consortium members _only_ . This dataset is confidential, as
GATES is a commercial project; thus, no procedures to make it findable will be
conducted. This dataset is accompanied by metadata, allowing the detection of
outdated material (an illustrative representation of such a record follows the
table). The metadata fields are:
<table>
<tr>
<th>
**Creator**
</th>
<th>
The consortium partner responsible for the creation
</th> </tr>
<tr>
<td>
**Title**
</td>
<td>
The label of this dataset
</td> </tr>
<tr>
<td>
**Resource type**
</td>
<td>
The format of the dataset
</td> </tr>
<tr>
<td>
**Date**
</td>
<td>
The date this dataset was collected
</td> </tr> </table>
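For illustration, such a metadata record could be represented as a small data structure. The sketch below mirrors the four fields of the table; the class name and example values are invented for illustration.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DatasetMetadata:
    """One metadata record, mirroring the table above."""
    creator: str        # The consortium partner responsible for the creation
    title: str          # The label of this dataset
    resource_type: str  # The format of the dataset
    collected_on: date  # The date this dataset was collected

# Hypothetical example record for one collected file.
example = DatasetMetadata(
    creator="AUA",
    title="Material collection - irrigation sensors",
    resource_type="pdf",
    collected_on=date(2018, 5, 14),
)
```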
#### Making data openly accessible
Not applicable. (Confidential Data)
#### Making data interoperable
Not applicable. (Confidential Data)
#### Increase data re-use (through clarifying licenses)
Not applicable. (Confidential Data)
**ALLOCATION OF RESOURCES**
Not applicable. (Confidential Data)
**DATA SECURITY**
The data will be stored in the projects private server following its security
and backup protocols.
**ETHICAL ASPECTS**
Not applicable. (Confidential Data)
**OTHER**
Not applicable. (Confidential Data)
## Dataset No 4: Current Smart Farming Technologies (SFT) data
### Data summary
The purpose of this dataset is to assist/enable the creation of the algorithms
and models for developing the main game mechanics of GATES related to the
simulation of the agricultural environment and the environmental and economic
benefits of using SFT. The source of this dataset was the outcome of an
investigation of the current SFT (machinery, actuators, sensors, services,
methodologies) in terms of yield, power consumption and CO2 emissions. The
“Material Collection Data” dataset (No 3) will be re-used by the consortium
for the investigation.
This generated data exists in the following formats:
* C# code git repository containing all of the agricultural algorithms
* A Microsoft Excel document (.xlsx), listing the current SFT machinery with its average market price and benefits in terms of fuel reduction, efficiency and quality.
* An Adobe Reader PDF file (.pdf), deliverable 2.2, where the material collected in Task 2.2 (Material collection, classification and evaluation) was converted to algorithms and was modelled and documented for developing the main game mechanics of GATES.
The git repository containing the algorithms is hosted on the private server
of _Mad About Pandas_ and is strictly accessible only by their developers and
the _AUA_ developers that assist with the algorithm creation.
The SFT list Excel file is hosted in the project's private folder and, at the
moment this deliverable is written, is approximately 13Kb. It is not yet
finalized, as it only contains the SFTs that are supported in the current
version of the game (MVG2), and it is constantly updated with new data as the
game development evolves. The following table shows the data structure of the
Excel file (an illustrative code sketch follows the table):
<table>
<tr>
<th>
**Parameter**
</th>
<th>
**Input**
</th>
<th>
**Unit**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Name/ID**
</td>
<td>
</td>
<td>
txt
</td>
<td>
Unique name to identify SFT.
</td> </tr>
<tr>
<td>
**Type**
</td>
<td>
</td>
<td>
List
(Sensor
Machinery
Service)
</td>
<td>
Type of machine.
</td> </tr>
<tr>
<td>
**Area**
</td>
<td>
</td>
<td>
List
(Tillage,
Seeding, Irrigation etc.)
</td>
<td>
Agricultural operation the SFT is used in
</td> </tr> </table>
<table>
<tr>
<th>
**Crop**
</th>
<th>
</th>
<th>
List of available crop
</th>
<th>
Crop(s) that the SFT applies to
</th> </tr>
<tr>
<td>
**Short**
**Description**
</td>
<td>
</td>
<td>
txt
</td>
<td>
Short in-game description that explains what the SFT does. Will be displayed
in the shop.
</td> </tr>
<tr>
<td>
**Long**
**Description**
</td>
<td>
</td>
<td>
txt
</td>
<td>
Extended in-game description that explains the technology behind the SFT.
</td> </tr>
<tr>
<td>
**Info-Link**
</td>
<td>
</td>
<td>
hyperlink
</td>
<td>
Link to GATES library with additional information about the SFT.
</td> </tr>
<tr>
<td>
**Soil Moisture**
</td>
<td>
</td>
<td>
%
</td>
<td>
The amount of additional precision in determining the soil moisture
</td> </tr>
<tr>
<td>
**Soil Nutrition**
</td>
<td>
</td>
<td>
%
</td>
<td>
The amount of additional precision in determining the required amount of
fertilizer.
</td> </tr>
<tr>
<td>
**Crop Infection**
</td>
<td>
</td>
<td>
%
</td>
<td>
The amount of additional precision in determining the crop infection
</td> </tr>
<tr>
<td>
**Crop Ripeness**
</td>
<td>
</td>
<td>
%
</td>
<td>
The amount of additional precision in determining the optimal time for
harvesting
</td> </tr>
<tr>
<td>
**Weather Forecast**
</td>
<td>
</td>
<td>
%
</td>
<td>
The additional precision in forecasting weather.
</td> </tr>
<tr>
<td>
**Yield Increase**
</td>
<td>
</td>
<td>
%
</td>
<td>
Increase in yield by using the SFT compared to conventional farming.
</td> </tr>
<tr>
<td>
**Labour**
**Reduction**
</td>
<td>
</td>
<td>
%
</td>
<td>
Decrease in labour by using the SFT compared to conventional farming.
</td> </tr>
<tr>
<td>
**Fuel**
**Consumption**
</td>
<td>
</td>
<td>
%
</td>
<td>
Effect on fuel consumption from using the SFT compared to conventional
farming.
</td> </tr>
<tr>
<td>
**GHG reduction**
</td>
<td>
</td>
<td>
%
</td>
<td>
Reduction of GHG emissions from using the SFT compared to conventional
farming.
</td> </tr>
<tr>
<td>
**Fertilization Efficiency**
</td>
<td>
</td>
<td>
%
</td>
<td>
Increase in Fertilization Efficiency by using the SFT compared to conventional
farming.
</td> </tr>
<tr>
<td>
**Seeding**
**Efficiency**
</td>
<td>
</td>
<td>
%
</td>
<td>
Increase in Seeding Efficiency by using the SFT compared to conventional
farming.
</td> </tr>
<tr>
<td>
**Tillage**
**Efficiency**
</td>
<td>
</td>
<td>
%
</td>
<td>
Increase in Tillage Efficiency by using the SFT compared to conventional
farming.
</td> </tr>
<tr>
<td>
**Irrigation Efficiency**
</td>
<td>
</td>
<td>
%
</td>
<td>
Increase in Irrigation Efficiency by using the SFT compared to conventional
farming.
</td> </tr>
<tr>
<td>
**Spraying Efficiency**
</td>
<td>
</td>
<td>
%
</td>
<td>
Increase in Spraying Efficiency by using the SFT compared to conventional
farming.
</td> </tr>
<tr>
<td>
**Harvesting Efficiency**
</td>
<td>
</td>
<td>
%
</td>
<td>
Increase in Harvesting Efficiency by using the SFT compared to conventional
farming.
</td> </tr>
<tr>
<td>
**Initial Price**
</td>
<td>
</td>
<td>
€
</td>
<td>
Initial price that needs to be paid to acquire the SFT.
</td> </tr>
<tr>
<td>
**Yearly fee Payments**
</td>
<td>
</td>
<td>
€/year
</td>
<td>
Yearly costs for using the SFT. Includes subscription fees for services,
insurance, maintenance etc.
</td> </tr>
<tr>
<td>
**Training cost**
</td>
<td>
</td>
<td>
€/day
</td>
<td>
Cost for training per day
</td> </tr>
<tr>
<td>
**Depreciation**
</td>
<td>
</td>
<td>
years
</td>
<td>
Reduction of value over the years
</td> </tr>
<tr>
<td>
**Task Cost Impact**
</td>
<td>
</td>
<td>
€/Use
</td>
<td>
The financial impact on the task. Might be a positive value (increasing costs)
or a negative value (reducing costs)
</td> </tr>
<tr>
<td>
**Scale**
</td>
<td>
</td>
<td>
ha
</td>
<td>
Number of ha the SFT can operate on in a cropping season
</td> </tr>
<tr>
<td>
**Start Age**
</td>
<td>
</td>
<td>
years
</td>
<td>
The age of SFT at the time it is purchased. (Used SFTs only)
</td> </tr>
<tr>
<td>
**Deterioration**
</td>
<td>
</td>
<td>
formula
</td>
<td>
The reduction in efficiency by the SFT based on age.
</td> </tr>
<tr>
<td>
**Lifetime**
</td>
<td>
</td>
<td>
years
</td>
<td>
The economic life of a machine
</td> </tr>
<tr>
<td>
**Complexity**
</td>
<td>
</td>
<td>
%
</td>
<td>
Initial efficiency for using the SFT. Depends on how complex the SFT is and
can be improved over time.
</td> </tr>
<tr>
<td>
**Requires SFT**
</td>
<td>
</td>
<td>
tags
</td>
<td>
List of Tags associated with SFTs, _one_ of which is required to use this SFT.
</td> </tr>
<tr>
<td>
**Exclusive**
**Effects with**
**SFTs**
</td>
<td>
</td>
<td>
tags
</td>
<td>
List of Tags associated with SFTs that don't have cumulative effects with this
SFT.
</td> </tr>
<tr>
<td>
**Tags**
</td>
<td>
</td>
<td>
tags
</td>
<td>
List of Tags associated with this SFT to be used for functional dependencies
with other SFTs.
</td> </tr> </table>
**Table 1: SFT data information**
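To make the structure of Table 1 concrete, the sketch below models one SFT row with a representative subset of the columns; the class name, field names and all values are illustrative assumptions, not the project's actual code or data.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SFTEntry:
    """A representative subset of the SFT Market columns (see Table 1)."""
    name: str                    # Unique name to identify the SFT
    sft_type: str                # Sensor, Machinery or Service
    area: str                    # Agricultural operation, e.g. Irrigation
    yield_increase_pct: float    # % yield increase vs. conventional farming
    fuel_consumption_pct: float  # % effect on fuel consumption
    initial_price_eur: float     # € paid to acquire the SFT
    yearly_fee_eur: float        # €/year for subscriptions, insurance etc.
    scale_ha: float              # ha the SFT can operate on per season
    requires_sft: List[str] = field(default_factory=list)  # tags, one required

# Invented example row, for illustration only.
soil_probe = SFTEntry(
    name="Soil moisture probe",
    sft_type="Sensor",
    area="Irrigation",
    yield_increase_pct=3.0,
    fuel_consumption_pct=0.0,
    initial_price_eur=450.0,
    yearly_fee_eur=30.0,
    scale_ha=20.0,
    requires_sft=["telemetry-base-station"],
)
```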
Deliverable 2.2 is a PDF file of approximately 3.43Mb and is stored on the
project's private server in order to be accessible by the members of the
consortium.
To conclude, the contents of this dataset are used internally by the
consortium members to design and develop the game mechanics. All of the data
is confidential and shared strictly among the partners.
### FAIR DATA
#### Making data findable, including provisions for metadata
This dataset is placed on the project's private server and shared with the
consortium members only. It is confidential because GATES is a commercial
project and this dataset is part of its inner processes; thus, no procedures
to make it publicly findable were or will be conducted. This dataset is
accompanied by metadata, allowing the detection of outdated material. The
metadata information will be:
<table>
<tr>
<th>
**Creator**
</th>
<th>
The consortium partner responsible for the creation
</th> </tr>
<tr>
<td>
**Title**
</td>
<td>
The label of this dataset
</td> </tr>
<tr>
<td>
**Resource type**
</td>
<td>
The format of the dataset
</td> </tr>
<tr>
<td>
**Date**
</td>
<td>
The date this dataset was collected
</td> </tr> </table>
The consortium has decided on the following naming conventions per file:
* GATES – SFT Market.xlsx
* GATES – Modelling and simulation algorithms deliverable.pdf
#### Making data openly accessible
None of the data in this dataset is openly available, as it is confidential as
part of the inner processes of the project and accessible only to the members
of the consortium. It is stored in a folder on the project's private server,
which is shared only with the consortium partners to limit accessibility.
#### Making data interoperable
Not applicable. (Confidential Data)
#### Increase data re-use (through clarifying licenses)
Not applicable. (Confidential Data)
**ALLOCATION OF RESOURCES**
Not applicable. (Confidential Data)
**DATA SECURITY**
The data will be stored in the projects private server following its security
and backup protocols.
**ETHICAL ASPECTS**
Not applicable. (Confidential Data)
**OTHER**
Not applicable. (Confidential Data)
## Dataset No 5: Educational Content Data (Library Module)
### Data Summary
The purpose of this dataset is the development and collection of the data that
will be used to teach SFT skills in an effective way and provide more
engagement for the trainees/learners. Specifically, this dataset, along with
the storyboarding (Task 3.2), will be the foundation of the Library Module of
GATES, responsible for educating the players on the SFTs and assisting them
with the objectives of the game.
To create this dataset, the SFT data collection (Dataset No 3) was re-used,
and the following file formats have been produced in the scope of the project:
* **Online PDF Documents** : Manuals of the SFT machinery, flyers, presentations and other education material that was collected and/or created which are available in PDF format.
* **Videos** : Tutorials, instructions and other video material are available in .mp4 format to download. Videos are also uploaded to the YouTube channel of the project to view and share.
* **Images** : part of the educational content will be in the form of images that will be provided in the most used formats, JPEG, PNG, GIF.
Specifically, this dataset can be categorized into two distinct groups: 1) the
private/confidential data and 2) the public access data.
_Private/Confidential Data_
This data consists of the raw information/data that was collected and contains
the list of the SFTs that will be included into the library module, the
articles that contain the information for the machinery and the bibliography
that they were based upon. This dataset is approximately _272Mb_ and is hosted
in the private server of the project where only the partners have access to
it.
_Public Data_
This data consists of information that is publicly available to the players of
the game and any interested stakeholder. It is approximately _178Mb_ and
consists of videos that are uploaded and openly accessible via the project's
YouTube channel. Additionally, this dataset is also hosted on the private
server of the project for data integrity and availability reasons.
The specific videos that are online at the moment this deliverable is written
are:
<table>
<tr>
<th>
**Video name**
</th>
<th>
**Data description**
</th> </tr>
<tr>
<td>
**01_Gates_Account_Basics.mp4**
</td>
<td>
It presents the basics on how to login and create a new account
</td> </tr>
<tr>
<td>
**02_Gates_SFT_Showcase.mp4**
</td>
<td>
This video presents the new module of the Smart Farming Technologies (SFT)
showcase
</td> </tr>
<tr>
<td>
**03_Gates_SFT_Training.mp4**
</td>
<td>
This video presents the new SFT training module, which aims to show the basic
principles behind some SFT.
</td> </tr>
<tr>
<td>
**04_GATES_Main_Story.mp4**
</td>
<td>
This video presents the Main Story module where 8 scenarios have been created
to explain different aspects of the game
</td> </tr>
<tr>
<td>
**05_My_Scenarios.mp4**
</td>
<td>
This video presents how to create new custom scenarios to play the game
</td> </tr>
<tr>
<td>
**06_Gates_Realtime.mp4**
</td>
<td>
This video presents how to play real time mode.
</td> </tr>
<tr>
<td>
**07\. GATES Library.mp4**
</td>
<td>
This video presents the GATES library ( http://gates-library.agenso.gr/ ),
where the player can access multimedia information regarding the SFTs
</td> </tr>
<tr>
<td>
**GATES Introduction to 2nd**
**Minimum Viable Game.mp4**
</td>
<td>
Showcases the 2nd Minimum Viable Game (2MVG) of the GATES Smart Farming
Simulation Platform.
</td> </tr> </table>
This dataset is extremely useful to the players of the game and to researchers
that require access to the current SFT technologies.
### FAIR DATA
#### Making data findable, including provisions for metadata
A portion of this dataset (the private/confidential data) is hosted only on
the project's private server, whereas the Public Data is hosted on the web
portal server so as to be openly accessible. Each dataset is accompanied by
the following metadata to ensure its quality:
<table>
<tr>
<th>
**Creator**
</th>
<th>
The consortium partner responsible for the creation
</th> </tr>
<tr>
<td>
**Title**
</td>
<td>
The label of this dataset
</td> </tr>
<tr>
<td>
**File type**
</td>
<td>
The format of the dataset (pdf, pptx etc.)
</td> </tr>
<tr>
<td>
**File Size**
</td>
<td>
The size of the dataset
</td> </tr>
<tr>
<td>
**Version**
</td>
<td>
The version number of the dataset
</td> </tr>
<tr>
<td>
**Date**
</td>
<td>
The date this dataset was created
</td> </tr>
<tr>
<td>
**Info**
</td>
<td>
Information of the dataset in case any clarification is necessary.
</td> </tr> </table>
_Private/Confidential Data_
Only members of the consortium are able to access this data through the
private server. Specifically, to facilitate easier access, there is a
dedicated folder for each SFT, which contains the following files:
<table>
<tr>
<th>
**File name convention**
</th>
<th>
**Data description**
</th> </tr>
<tr>
<td>
**{SFT NAME}-farmer.docx**
</td>
<td>
The information/article of this SFT for user with _Farmer_ access rights
</td> </tr>
<tr>
<td>
**{SFT NAME}-professionals.docx**
</td>
<td>
The information/article of this SFT for user with _Professionals_ access
rights
</td> </tr>
<tr>
<td>
**{SFT NAME}-students.docx**
</td>
<td>
The information/article of this SFT for user with _Students_ access rights
</td> </tr>
<tr>
<td>
**{SOURCE}.pdf**
</td>
<td>
Several pdfs of the bibliography used for the creation of the articles for
this SFT
</td> </tr> </table>
_Public Data_
Through the GATES Library platform, visitors are able to locate this data
either by following the menu hierarchy or by using the search bar and querying
the name of the dataset as a keyword. The platform also allows its visitors to
search using predefined categories and, based on their access rights (
_Farmer, Student, Professional_ ), shows the corresponding information.
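The role-based filtering described above could, for example, work along the lines of the following sketch; the article names, role names and the mapping itself are hypothetical, and the actual GATES Library implementation may differ.

```python
from typing import Dict, List

# Hypothetical mapping of library articles to the roles allowed to view them.
ARTICLES: Dict[str, List[str]] = {
    "soil-moisture-probe-farmer": ["Farmer"],
    "soil-moisture-probe-students": ["Student"],
    "soil-moisture-probe-professionals": ["Professional"],
}

def visible_articles(role: str) -> List[str]:
    """Return the articles a visitor with the given access rights may view."""
    return [name for name, roles in ARTICLES.items() if role in roles]

print(visible_articles("Farmer"))  # ['soil-moisture-probe-farmer']
```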
#### Making data openly accessible
All content in the _Public Data_ category is openly accessible via the GATES
Library platform without any restriction on access. To access it, users need
to navigate to the library website, locate the desired SFT and preview all of
its information. All media content (images, videos etc.) is incorporated into
the pages and can be previewed without the need for any external software. The
web portal can be accessed by visitors through the browser of their choice
(Firefox, Chrome, Microsoft Edge etc.) by navigating to
_http://gates-library.agenso.gr/_ .
_All content that rely on the “Private/Confidential Data” is restricted and
accessible only from the members of the consortium._
#### Making data interoperable
This dataset is educational content to be used “as is” for reference and
educational material. By extension, no interoperability operations are to be
performed.
#### Increase data re-use (through clarifying licenses)
The partners have decided that the “Public Data” of this dataset will be open
for everyone to view and download as educational content, but with no right to
reproduce it, as a subset of this data will be provided to GATES by third
parties under their own licenses, and specific permissions will be required.
**ALLOCATION OF RESOURCES**
Not applicable (Covered by the person months of WP3).
### DATA SECURITY
Since the data are openly available in the web portal, no extra security
measures are needed. A backup of this dataset will be stored on the project's
private server to ensure data recovery in case of data loss.
**ETHICAL ASPECTS**
Not applicable.
**OTHER**
Not applicable.
## Dataset No 6: Game Backend Data (Logger Module)
### Data Summary
The purpose of this dataset is to track player behavior and get all the
insights the consortium needs to improve the game in the framework of the
validated learning process (WP5). Several data are recorded, such as
individual player performance, reactions and results from the players, and
ratings of the tasks by the users, in order to increase quality and
performance.
No existing data were re-used for the collection of this dataset. This dataset
is a collection of information on the behavior of the players during the game,
recorded automatically by the _Unity_ engine, and provides the consortium with
the following information:
<table>
<tr>
<th>
**Data type**
</th>
<th>
**Data description**
</th> </tr>
<tr>
<td>
**Operating system**
</td>
<td>
The type of operating system under which the players accessed the game
(Windows, Linux, MacOS)
</td> </tr>
<tr>
<td>
**Browser**
</td>
<td>
The browser used to start the game (Firefox, Chrome, Opera etc)
</td> </tr>
<tr>
<td>
**End State**
</td>
<td>
The result of the game (Won or Lost)
</td> </tr>
<tr>
<td>
**User ID**
</td>
<td>
The ID of the current player
</td> </tr>
<tr>
<td>
**Scenario Spending**
</td>
<td>
The expenditures of the player during the game play
</td> </tr>
<tr>
<td>
**Scenario Income**
</td>
<td>
The money the players earned during the game play
</td> </tr>
<tr>
<td>
**Scenario ID**
</td>
<td>
The ID of the scenario that was played
</td> </tr> </table>
The game behavior data are stored in one file per month, and each file's size
can vary from 1Kb upwards, according to the number of scenarios that are
recorded.
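For illustration only, one such record could be serialised per line into the monthly file as sketched below; the field names, value formats and file layout are assumptions, since the document does not specify the Unity logger's exact format.

```python
import json

# Hypothetical shape of one logged game session (field names assumed).
record = {
    "operating_system": "Windows",
    "browser": "Firefox",
    "end_state": "Won",
    "user_id": 1042,
    "scenario_id": 7,
    "scenario_spending": 12500.0,
    "scenario_income": 18200.0,
}

# Appending to the current month's log file, as described above.
with open("2018-06.log", "a", encoding="utf-8") as fh:
    fh.write(json.dumps(record) + "\n")
```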
### FAIR DATA
#### Making data findable, including provisions for metadata
The data in this dataset is confidential, with access restricted to the
members of the consortium only.
#### Making data openly accessible
Not applicable (Confidential data)
#### Making data interoperable
Not applicable (Confidential data)
#### Increase data re-use (through clarifying licenses)
Not applicable (Confidential data)
**ALLOCATION OF RESOURCES**
Not applicable (Confidential data)
### DATA SECURITY
All of the data will be hosted in the projects private server for safekeeping.
Periodical backup mechanisms will be established to ensure data recovery.
**ETHICAL ASPECTS**
Not applicable (Confidential data)
**OTHER**
Not applicable (Confidential data)
## Dataset No 7: GATES Game Data
### Data Summary
The purpose of this dataset is to collect the data needed for the development
of the game core, such as graphics, animations, gameplay, interface, audio
etc. The origin of this data was the personal work of the members of the
consortium, so no data re-use is applicable.
The data is in numerous formats, based on its type. For example,
* **3-D modelling data** will be in FBX format (.fbx), as exported from Blender
* **Animation data** will be in Maya format (.anim)
* **Educational material and info tips** will be in the form of text database records
* **Images** will be in .jpeg and .svg vector files
During the lifetime of the project several files have been created
(approximately _37Mb_ of compressed .fbx files) and saved to the projects
private server and accessed by the consortium members only. This dataset is
extremely valuable to the developers of the game and to the members of the
consortium in general.
### FAIR DATA
#### Making data findable, including provisions for metadata
The content of this dataset is hosted in the private server of the project and
restricted to the consortium members only. Moreover, it is accompanied by the
following metadata table to ensure its quality and allow the partners to
locate the desired version:
<table>
<tr>
<th>
**Creator**
</th>
<th>
The consortium partner responsible for the creation
</th> </tr>
<tr>
<td>
**Title**
</td>
<td>
The label of this dataset
</td> </tr>
<tr>
<td>
**File type**
</td>
<td>
The format of the dataset (.fbx, .anim, etc)
</td> </tr>
<tr>
<td>
**File Size**
</td>
<td>
The size of the dataset
</td> </tr>
<tr>
<td>
**Version**
</td>
<td>
The version number of the dataset
</td> </tr>
<tr>
<td>
**Date**
</td>
<td>
The date this dataset was created
</td> </tr>
<tr>
<td>
**Info**
</td>
<td>
Information of the dataset in case any clarification is necessary.
</td> </tr> </table>
#### Making data openly accessible
Not applicable (Confidential data)
#### Making data interoperable
Not applicable (Confidential data)
#### Increase data re-use (through clarifying licenses)
Not applicable (Confidential data)
**ALLOCATION OF RESOURCES**
No additional cost is required for this dataset as it is included in the
person months cost of WP4.
### DATA SECURITY
All of the data will be hosted in the projects private server for safekeeping.
Periodical backup mechanisms will be established to ensure data recovery.
**ETHICAL ASPECTS**
Not applicable (Confidential data)
**OTHER**
Not applicable (Confidential data)
## Dataset No 8: Meteorological Data (Data module)
### Data Summary
The purpose of this dataset is to provide GATES with historical meteorological
data (temperature, humidity, precipitation etc.) from all over Europe for the
simulation of the agricultural conditions. These data are vital for the
algorithms to work as expected. The source of this dataset is online weather
services, which were queried by EU climate zone, with the results stored on
the project's private server.
For the purposes of this dataset, Excel files were created containing the
responses of the web services regarding the weather conditions of each climate
zone for the last 16 years. No existing data was re-used, the total storage
size of all the created files is 4.71Mb, and it is expected to increase in
order to maintain the previous information (history). This dataset is vital
for the development of the simulation part of the game. The following table
shows the data types/information stored in each file:
<table>
<tr>
<th>
**Title**
</th>
<th>
**Unit**
</th> </tr>
<tr>
<td>
**Day**
</td>
<td>
Date
</td> </tr>
<tr>
<td>
**Average Temperature**
</td>
<td>
°C
</td> </tr>
<tr>
<td>
**Maximum temperature**
</td>
<td>
°C
</td> </tr>
<tr>
<td>
**Minimum temperature**
</td>
<td>
°C
</td> </tr>
<tr>
<td>
**Atmospheric pressure at sea level**
</td>
<td>
hPa
</td> </tr>
<tr>
<td>
**Average relative humidity**
</td>
<td>
%
</td> </tr>
<tr>
<td>
**Total rainfall and / or snowmelt**
</td>
<td>
mm
</td> </tr>
<tr>
<td>
**Average visibility**
</td>
<td>
Km
</td> </tr>
<tr>
<td>
**Average wind speed**
</td>
<td>
Km/h
</td> </tr>
<tr>
<td>
**Maximum sustained wind speed**
</td>
<td>
Km/h
</td> </tr>
<tr>
<td>
**Maximum speed of wind**
</td>
<td>
Km/h
</td> </tr>
<tr>
<td>
**Indicates whether there was rain or drizzle**
</td>
<td>
the total days it rained
</td> </tr>
<tr>
<td>
**Indicates if it snowed**
</td>
<td>
the total days it snowed
</td> </tr>
<tr>
<td>
**Indicates whether there was a storm**
</td>
<td>
Total days with thunderstorm
</td> </tr>
<tr>
<td>
**Indicates whether there was fog**
</td>
<td>
Total days with fog
</td> </tr> </table>
### FAIR DATA
#### Making data findable, including provisions for metadata
This dataset is restricted and accessible only to the members of the
consortium. It is stored in the project's private folder and can be accessed
using Microsoft Excel. In particular, this dataset consists of 18 files, one
for each supported area. The naming convention used is {area}_{Date From:
YYYY}-{Date To: YYYY}.csv, e.g. Athens_2000-2016.csv.
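A minimal sketch of parsing this naming convention, assuming the {area}_{from}-{to}.csv pattern exactly as stated above:

```python
import re
from typing import Tuple

FILENAME_PATTERN = re.compile(
    r"^(?P<area>.+)_(?P<start>\d{4})-(?P<end>\d{4})\.csv$"
)

def parse_weather_filename(filename: str) -> Tuple[str, int, int]:
    """Split a weather file name into (area, start year, end year)."""
    match = FILENAME_PATTERN.match(filename)
    if match is None:
        raise ValueError(f"Unexpected file name: {filename}")
    return match["area"], int(match["start"]), int(match["end"])

print(parse_weather_filename("Athens_2000-2016.csv"))  # ('Athens', 2000, 2016)
```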
This dataset is accompanied by metadata information to assure the quality of
the service. The metadata include the following information:
<table>
<tr>
<th>
**Creator**
</th>
<th>
The consortium partner responsible for the creation
</th> </tr>
<tr>
<td>
**Title**
</td>
<td>
The label of this dataset
</td> </tr>
<tr>
<td>
**File type**
</td>
<td>
The format of the dataset (.xml, .json)
</td> </tr>
<tr>
<td>
**File Size**
</td>
<td>
The size of the dataset
</td> </tr>
<tr>
<td>
**Climate zone**
</td>
<td>
An identifier of the climate zone these data represent
</td> </tr>
<tr>
<td>
**Date period**
</td>
<td>
The date range the data correspond to
</td> </tr>
<tr>
<td>
**Original Source**
</td>
<td>
The web service provider of the data
</td> </tr>
<tr>
<td>
**Info**
</td>
<td>
Information of the dataset in case any clarification is necessary.
</td> </tr> </table>
No versioning provisions are to be made since the data relates to a specific
time period and will not change in the future.
#### Making data openly accessible
Not applicable (Confidential data)
#### Making data interoperable
Not applicable (Confidential data)
#### Increase data re-use (through clarifying licenses)
Not applicable (Confidential data)
**ALLOCATION OF RESOURCES**
No additional cost is required for this dataset as it is included in the
person months cost of WP4.
### DATA SECURITY
All of the data is hosted in the projects private server for safekeeping.
Periodical backup mechanisms are established to ensure data recovery.
**ETHICAL ASPECTS**
Not applicable (Confidential data)
**OTHER**
Not applicable (Confidential data)
## Dataset No 9: Agricultural Data (Data module)
### Data Summary
The purpose of this dataset is to provide GATES with historical agricultural
data from all over Europe for the simulation of the agricultural conditions.
The data were classified according to the European climate zones and provide
information regarding soil, yield and pest infestation. The soil data come
from online databases, whereas for the yield and pest infestation data
research was conducted by the consortium.
Specifically, the consortium, after careful consideration, decided to use
_https://www.soilgrids.org/_ and _http://www.fao.org/faostat/en/#data_ to
query and retrieve agricultural data from all over Europe. More information on
the methodology and general approach can be found in deliverable “D2.2 Report
on GATES Modelling and Simulation Algorithms”.
No existing data was re-used. This dataset is stored on the project's private
server in Excel files (.xlsx) containing numerous pieces of information from
all over the EU, such as: area harvested (hectares), yield (hg/ha), production
(tonnes), pesticide usage and many more. This dataset is approximately _168Mb_
of raw data.
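As an illustration of how such a file could be consumed, the following sketch loads a hypothetical Excel export with pandas; the file name and column labels are assumptions, loosely following FAOSTAT terminology, and do not reflect the project's actual schema.

```python
import pandas as pd

# Hypothetical file and column names, loosely following FAOSTAT terminology.
df = pd.read_excel("EU_crop_production.xlsx")

# e.g. total production in tonnes per country for one crop.
wheat = df[df["Item"] == "Wheat"]
print(wheat.groupby("Area")["Production (tonnes)"].sum())
```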
### FAIR DATA
#### Making data findable, including provisions for metadata
Not applicable (Confidential data)
#### Making data openly accessible
Not applicable (Confidential data)
#### Making data interoperable
Not applicable (Confidential data)
#### Increase data re-use (through clarifying licenses)
Not applicable (Confidential data)
**ALLOCATION OF RESOURCES**
No additional cost is required for this dataset as it is included in the
person months cost of WP4.
### DATA SECURITY
All of the data will be hosted in the projects private server for safekeeping.
Periodical backup mechanisms will be established to ensure data recovery.
**ETHICAL ASPECTS**
Not applicable (Confidential data)
**OTHER**
Not applicable (Confidential data)
## Dataset No 10: GATES experiments data
### Data Summary
During the lifespan of the GATES project, three game experiments were
conducted in three pilot countries. The data collected during these
experiments is the content of this dataset. Its purpose is to validate the
game version at hand and to prove that a) players enjoy the core gameplay of
the specific version and b) players can understand the game mechanics.
No existing data was re-used; the new data that was collected came from the
players that provided feedback on their gaming experience during the
experiment. The data is in Excel format, and this particular dataset was
extremely useful to the game development partner, and to the consortium in
general, for validating the game in terms of achieving the desired goals with
a live audience.
Specifically, the questionnaire answered by the test players during the three
validation cycles resulted in the following Excel files (a short verification
sketch follows the list):
* “ _GATES - First validation cycle-results-survey189595.xlsx_ ”, approximately 28Kb in size, containing 85 responses from the players
* “GATES - Second validation cycle-results-survey.xlsx”, approximately 65Kb in size, containing 217 responses from the players
* “GATES - Third validation cycle-results-survey.xlsx”, approximately 68Kb in size, containing 224 responses from the players
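For illustration, the per-cycle response counts could be verified from these files with a short script such as the one below; it assumes one response per row on the first sheet of each workbook.

```python
import pandas as pd

files = {
    "First cycle": "GATES - First validation cycle-results-survey189595.xlsx",
    "Second cycle": "GATES - Second validation cycle-results-survey.xlsx",
    "Third cycle": "GATES - Third validation cycle-results-survey.xlsx",
}

# Assumes one response per row in the first sheet of each workbook.
for cycle, path in files.items():
    responses = pd.read_excel(path)
    print(f"{cycle}: {len(responses)} responses")
```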
### FAIR DATA
#### Making data findable, including provisions for metadata
This dataset is part of the inner validation process of the GATES game and is
restricted and accessible only by the members of the consortium. It is stored
on the project's private server, which is accessible by all the members of the
consortium.
There is no need for version control, as the data cannot change after
collection, and there is no need for metadata either. The naming conventions
and formats of the files were decided during the experiment design process.
#### Making data openly accessible
Not applicable (Confidential data)
#### Making data interoperable
Not applicable (Confidential data)
#### Increase data re-use (through clarifying licenses)
Not applicable (Confidential data)
**ALLOCATION OF RESOURCES**
No additional cost is required for this dataset as it is included in the
person months cost of WP4.
### DATA SECURITY
All of the data will be hosted in the projects private server for safekeeping.
Periodical backup mechanisms will be established to ensure data recovery.
**ETHICAL ASPECTS**
Not applicable (Confidential data)
**OTHER**
Not applicable (Confidential data)
## Dataset No 11: GATES market data
### Data Summary
The purpose of this dataset is to assist the consortium in better identifying
the countries in which market entry will be more straightforward, leading to a
selection of 5 target countries whose markets will be addressed in the
business plan. The source of this dataset is the work performed by the partner
“INI”, which conducted market research focusing on 10 EU countries, including
the most mature SFT markets of Northern Europe.
Several market criteria were processed and analyzed including (among many
others):
* Distribution of territory by type of region
* GDP per capita by type of region. Eurostat 2013
* Agricultural land use in EU countries
* Agriculture factor income and Gross Fixed capital
* Production of cereals in EU-28
* Production of fruit and vegetables in the targeted countries
* Production of grapes in the EU-28
* Farm Economic, Physical and Labour size for the EU-28
* Regular farm labor force, in persons
* Demographic statistics for Farm managers by age
* Farm training statistics
* Farmer gender statistics
* Internet Access statistics
* SFT acceptance rate in EU
* Business model for Serious Game
* Market acceptance research
* Game-based learning growth rates
* Game-based learning by Region, Country and Educational Game Type
* Research on available serious games in agriculture (direct competitor analysis)
* Age gap in farmers (ratio of young to old farmers)
* Knowledge needs of young farmers
* Potential stakeholder analysis
* Farmer association analysis in the EU
All of the information that was collected and thoroughly analyzed by INI is
well documented in deliverable D6.6, which has been submitted to the SyGMa
platform. This dataset will be very useful to the consortium for designing the
optimum path to reach the users.
### FAIR DATA
#### Making data findable, including provisions for metadata
This dataset was collected from open sources, but the analysis is considered
confidential, as GATES is a commercial project. By extension, this dataset is
confidential and restricted to the members of the consortium only.
#### Making data openly accessible
Not applicable (Confidential data)
#### Making data interoperable
Not applicable (Confidential data)
#### Increase data re-use (through clarifying licenses)
Not applicable (Confidential data)
**ALLOCATION OF RESOURCES**
No additional cost is required for this dataset as it is included in the
person months cost of WP4.
### DATA SECURITY
All of the data is hosted on the project's private server for safekeeping.
Periodic backup mechanisms will be established to ensure data recovery.
**ETHICAL ASPECTS**
Not applicable (Confidential data)
**OTHER**
Not applicable (Confidential data)
## Support Package
The support package includes a list of recommendations that should be applied
to the project in general and specifically to each partner, according to its
role in the consortium.
### General GATES recommendations
* To prepare the Data Management Plan within the first six months of the project and update the report during the lifetime of the project when significant changes happen (e.g. new data types to be included, changes in the consortium).
* To deposit the project’s scientific publications to Open Access Journal repository, partners’ institutional repository, GATES platform and/or OpenAIRE’s zenodo repository.
* Project partners to add the appropriate acknowledgement text to the scientific publications, related to GATES outcomes.
* Project partners to use the public Creative Commons open access licensing schema (e.g. CC BY 4.0) where applicable.
* To set up the workflow process for uploading and publishing content into the GATES platform.
* To set up the quality assurance process for publishing the GATES platform content.
* To define the information schema for the GATES platform content.
### Recommendations for AUA partner (GATES coordinator)
* To prepare the Data Management Plan within the first six months of the project and update the report during the lifetime of the project when significant changes happen (e.g. new data types to be included, changes in the consortium).
* To be responsible for the overall day-to-day financial and administrative management and coordination of the project.
* To act as a liaison between the Consortium and the European Commission; prepare and ensure the timely delivery of all required reports, deliverables, and financial statements to the European Commission.
* To deposit the AUA’s publications and associated metadata related to GATES outcomes into Open Access Journal repository or institutional repository (where applicable).
* To guide project partners to add the appropriate acknowledgement text to the scientific publications, related to GATES outcomes. This information should be included into specific publication section, as well into the metadata fields.
* To encourage project partners to use the open access licensing schema (CC BY 4.0) for the publications and reports related to GATES outcomes (if there are no restrictions).
* To upload the information related to smart farming technologies into the GATES platform with a link directly to the original source.
* As an alternative, the project publications could be stored into the OpenAIRE giving the ability to third parties to access, mine, exploit, reproduce and disseminate them.
* To create a library about SFT and serve it to the consortium members to be used both in-game and in the web portal as informational material.
* To upload descriptions of the AUA’s publications for smart farming technologies, as new records into the GATES platform with link to the original source (open access journal repository and/or institutional repository).
* To decide about the validation process for ensuring the high quality of the GATES platform content, e.g. the partners' role, quality assurance criteria for the uploaded content, which uploaded content will be published etc.
* To coordinate the licensing schema that should be applied for the GATES platform content. The most preferred licensing schema is Creative Commons CC BY 4.0, which prohibits the system users from re-using and modifying the SFT information without prior attribution to the source.
* To prepare the support package for project partners with guidelines on how to generate, collect and disseminate the project content.
* With explicit agreement by the Commission, other open access costs related to membership to a journal for publishing in open access or as a pre-condition for lower article processing charges could be explored (Project Officer could assist on that).
* To distribute the questionnaire to 100 students in the Agricultural University of Athens and provide the feedback to the consortium.
* To collect data from scientific publications related to smart farming technologies.
* To be responsible for the agenda and minutes of the meetings and to organize telephone or video conferences.
* To be responsible for the decisions regarding the strategic orientation of the project (strategy, progress, major project revisions, collaboration with other projects, dissemination, etc).
* To prepare and deliver the Consortium Agreement (prior to Grant Agreement).
* To set up a working protocol for the project partners to ensure smooth communication throughout the duration of the project.
* To set up a web-based tool (intranet in project’s platform) to facilitate internal communication on project activities and the safe exchange of documents.
* To implement financial sheets to facilitate the financial reporting and monitoring of the project
* To transfer the EU Financial Contribution received to the participants according to the budget agreed with the Project Officer and within the timeframe.
* To organize the periodic consortium meetings with the support of the hosting partner.
* To prepare the agendas and meetings of the General Assembly and ensure that decisions are made and properly implemented.
* To ensure that each WP is implemented according to the tasks and time schedule stated in the Grant Agreement and that results fit the stated objectives.
* To ensure frequent interactions with the WP leaders (through the MST) and the General Assembly.
* To ensure the overall integration of all WP activities.
* To promote gender equality within the project.
* To define the learning methodology of GATES.
* To create the algorithms and models for the mechanics of the game.
* To contribute on the GAME design development by creating the Storyboarding of GATES.
* To cooperate with INO on the user requirements and competencies.
* To cooperate with INO on setting up the game experiments.
* To cooperate with INO on applying lean game development.
* To cooperate with INO on running the experiments and the evaluation of the results.
* To cooperate with INI on creating the dissemination and communication plan.
* To cooperate with INI on the identification of the market and the creation of the business model.
* To cooperate with MP on game design and interfaces definition.
### Recommendations for InoSens partner
* To deposit the InoSens’s publications and associated metadata related to GATES outcomes into Open Access Journal repository or institutional repository (when applicable based on restriction policy).
* To design the questionnaire for the user requirements, distribute it between the partners and educate them.
* To distribute the questionnaire to 100 farmers in Serbia and provide the feedback to the consortium.
* To provide to the partners the overall methodology for the validation process of GATES.
* To provide the consolidated recommendation reports coming up from each of the three iterations of the validation process on Serbia, Spain and Greece.
* To assist in the game design process, ensuring that the educational content and the game interplay and modes fit the user requirements and the agreed game design.
* To setup the game experiments in the three pilot countries and run them in cooperation with the corresponding partners.
* To cooperate with AUA on the Material collection, classification and evaluation process.
* To cooperate with AUA on the learning methodology definition.
* To cooperate with MP on the development of the game backend and the minimum viable game (MVG) versions and the extended features.
### Recommendations for MP partner
* To propose the overall game design and interfaces and upload them into the private server.
* To support INO partner on the design of storyboard of the general game and each different modules.
* To develop and share with the partners the GATES minimum viable game (MVG).
* To develop and share with the partners the GATES Android and iOS versions of the game.
* To perform live tests of the game when it reaches its final state.
* To provide post-production and balancing, bugfixing, polishing and further improvements of the game.
* To develop a 3D environment that will simulate training with the realistic interfaces of the SFT.
* To construct the scenario creation mechanism that will allow users to customize their training experience.
* To develop the Statistics Module, which will present complex information in an easy-to-comprehend manner.
* To provide social features for player interaction through social platforms (e.g. Facebook, LinkedIn).
* To cooperate with INO partner on setting up the game experiments.
* To cooperate with INO partner on applying lean game development.
* To cooperate with INI partner on creating the dissemination and communication plan.
* To cooperate with INI partner on the identification of the market and the creation of the business model.
### Recommendations for Iniciativas Innovadoras partner (INI)
* To deposit the INI’s publications and associated metadata related to GATES outcomes into Open Access Journal repository or institutional repository (when applicable based on restriction policy).
* To distribute the questionnaire for the user requirements to agricultural audience (farmers, agronomists) and provide the consortium with the results.
* To create and upload the Dissemination & Exploitation Strategy & Plan to the web-portal and the projects private server.
* To create and upload the Dissemination & Exploitation Reports to the web-portal and the projects private server.
* To create and upload the dissemination material to the web-portal and the projects private server.
* To create and manage the GATES User Group Community.
* To collaborate with the coordinator (AUA) on organizing the final dissemination event in Greece.
* To document and upload the public deliverables of WP6 (Communication, Dissemination & Exploitation) to the web portal.
* To develop the business plan of GATES and present it to the partners.
* To provide partners with the dissemination material they will use to promote GATES.
* To conduct a market research and present the results to the members of the consortium.
### Recommendations for ANSEMAT partner
* To deposit the ANSEMAT’s publications and associated metadata related to GATES outcomes into Open Access Journal repository or institutional repository (when applicable based on restriction policy).
* To distribute the questionnaire for the user requirements to 20 company representatives and 20 SFT specialists and provide the consortium with the results.
* To assist on the GAME design development in creation of GATES Storyboarding.
* To assist on the creation of a library about SFT and serve it to the consortium members to be used both in-game and in the web portal as informational material.
* To disseminate the project in events and meetings, mostly in the SFT and PA fields.
* To create a target group of technical experts from its member associates to provide feedback to the developers on the required elements of the game.
**1\. Introduction**
After summarising the data management strategies of each partner, the
deliverable describes the handling of the different types of OpenReq data. The
common structure used to describe data handling in the specific work packages
is the following:
* Working data collection (i.e. data collected by the WPs in order to develop the platform and prepare the surveys)
* Data collected from users and organizations during the trials
* Data related to the OpenReq Knowledge Base
OpenReq partners currently do not have background data that cannot be made
public, except Siemens, which wants to protect business data of previous or
current projects. All partners reserve the right to protect some additional
background data if the need arises as the project progresses, after informing
the PO about this change.
This deliverable is a dynamic document that will be updated if needed during
the execution of the project.
**2\. Data Management Strategies of OpenReq partners**
**HITEC**
HITEC will ensure that any research that involves personal data processing is
compliant with the German Data Protection Law, through the legal department of
the University of Hamburg. We will consult the Ethical Committee of the
University of Hamburg's Department of Informatics to check that the research
being done within OpenReq complies with ethical and legal requirements.
In addition, we can also consult the Data Protection Officer to ensure
compliance with the Hamburgische Datenschutzgesetz ( _HmbDSG_ ).
Finally, we will use our long-standing collaborations with key research
institutes in ICT law and ethics (in particular with KU-Leuven and the
Humboldt Institute for Internet and Society) to gather early feedback and
discuss potential issues and possible solutions.
The data collected from the studies will be anonymized and stored at local
servers managed by the computing and data center of the Department of
Informatics.
**TU Graz**
TU Graz will conduct qualitative and empirical studies with typical adult
populations. The study participants will provide informed consent before
starting the study. Participant information sheets will be produced for each
study detailing what the study will involve, and participants will be given
time to reflect on whether they want to take part and provided with an
opportunity to ask for clarification. Explicit consent will be collected for
each study (in terms of a signed statement or other types of consent in online
studies), which will cover consent to participate and consent for data to be
gathered, stored, and shared (in anonymized form).
Ethics approval at TU Graz will be sought for those studies that require this
from the University's Commission for Scientific Integrity and Ethics. Data
will be collected, analysed, anonymized and stored according to the TU Graz
Datenschutzordnung and the Austrian Datenschutzgesetzes 2000 (DSG 2000), BGBl
I Nr. 165/1999.
The data collected from all user studies conducted at TU Graz will be stored
in an anonymized form such that there is no chance for inferences to the study
participants. All user studies will be stored on the basis of the server
infrastructure of TU Graz which follows the latest security standards. Backups
of all the data are generated automatically on a daily basis.
Details related to the way data is gathered and with whom it is shared are
provided in the work package related subsections of the document. All
collected data will be stored in a secure TU Graz OpenReq data storage.
**ENG**
The research performed by ENG involves qualitative and empirical studies with
typical adult populations. The participants will be able to give informed
consent for themselves. Participant information sheets will be produced for
each study detailing what the study will involve, and participants will be
given time to reflect on whether they want to take part and provided with an
opportunity to ask for clarification. Written consent will be obtained for
each study, which will cover consent to participate and consent for data to be
gathered, stored, and shared (in anonymized form).
Engineering will ensure that any research that involves personal data
processing is compliant with the Italian Data Protection Law.
We will consult the Privacy and Legal Committee of Engineering to check that
the research being done within OpenReq complies with ethical and legal
requirements.
**UPC**
The UPC, in accordance with the Spanish Organic Law 15/1999 of 13 December on
the protection of personal data (LOPD), the Royal Decree approving the
regulation implementing the Organic Law on the protection of personal data,
Law 30/2007 of 30 October on Public Sector Contracts, Directive 95/46/EC of
the European Parliament and of the Council of 24 October 1995 on the
protection of individuals and, after 25 May 2018, the General Data Protection
Regulation EU 2016/679, adopted in April 2016, with regard to the processing
of personal data and the free movement of such data, agrees that the data
collection and/or processing in these studies will generally not involve
personal data as defined in applicable international, EU and national law. In
case the studies require the treatment of personal data, the data protection
department of UPC will be consulted.
The research done by UPC will involve qualitative and empirical studies with
typical adult populations. The participants will be able to give informed
consent for themselves. A document will be produced for each study detailing
what the study will involve. This document will allow participants to decide
whether they want to take part in the study, having the opportunity to ask for
clarifications. Depending on the type of study, at the end of it each
participant will have the opportunity to review the results extracted from his
participation, to make sure that the results reflect the participant, and he
will have the opportunity to make modifications or clarifications if needed.
The data results will also be anonymized.
The data gained from the studies conducted by UPC will be stored in an
anonymized form such that the study participants' identity cannot be traced
back. All data related to the studies will be placed on the infrastructure of
UPC, which follows the latest security rules and should therefore be secure
against attacks. Backups of all the data are generated automatically twice a
day.
The legal department of UPC will check that the research being done by UPC
within OpenReq complies with legal requirements.
**VOGELLA**
Vogella will gather information from the Eclipse bug tracking system Bugzilla
in order to identify, classify and prioritize requirements for the Eclipse IDE
development. A questionnaire will be created and distributed to major Eclipse
developers in order to find out which criteria are relevant to them when
setting the priority of requirements and accepting requirements for the next
release.
**SIEMENS**
Data processing by Siemens in the context of this project will involve only
Siemens internal data, e.g. data about bid projects. All data processing is
conducted in compliance with Austrian law. If necessary (e.g. in the case of
data related to employees, which is currently not planned), the respective
authorities such as the staff council will be consulted.
**UH**
This project involves qualitative and empirical studies. The participants will
be able to give informed consent for themselves. Participants will be given
time to reflect on whether they want to take part and about recording and
storing data. They will be provided with an opportunity for clarification.
The research data will be stored on local servers maintained by the “IT for
research” unit, which provides research infrastructure for the Faculty of
Natural Sciences and ensures that data protection and security features comply
with local laws and the highest standards.
**QT**
The Qt Company will gather and make available a dataset of their Jira issue
tracking system. The data will be used to identify, classify and prioritize
requirements extracted from different projects maintained by the company (see
_https://bugreports.qt.io_ ). The acquisition of data regarding commercial
projects is currently under negotiation in order to guarantee ownership and
anonymity of the data.
**WINDTRE**
This work package involves qualitative and empirical studies with typical
adult populations. The participants will be able to give informed consent for
themselves. Participant information sheets will be produced for each study
detailing what the study will involve, and participants will be given time to
reflect on whether they want to take part and provided with an opportunity to
ask for clarification. Written consent will be obtained for each study,
covering both consent to participate and consent for data to be gathered,
stored, and shared (in anonymized form).
WindTre will ensure that any research involving personal data processing
complies with the Italian Data Protection Law.
We will consult the Privacy and Legal Committee of WindTre to check that the
research being done within OpenReq complies with ethical and legal
requirements.
**3\. Data Management Aspects of OpenReq Workpackages**
**WP 1 OpenReq Conceptual Framework**
The following table describes studies and related data collected for analysis
purposes. It also includes references to OpenReq trial partners and a short
description of trial partner data used in order to achieve the goals of WP1.
The usage of this data for the purposes of documenting OpenReq results in the
OpenReq knowledge base is described thereafter.
## Planned studies & resulting data
<table>
<tr>
<th>
**WP Task**
</th>
<th>
**Study description**
</th> </tr>
<tr>
<td>
Task 1.1 State of the art: Continuous monitoring and documentation
</td>
<td>
Studies regarding state of the art and open issues in recommender systems for
requirements engineering.
**Resulting Data:** Anonymous feedback from industry on the state of the art
of recommender systems in RE.
</td> </tr>
<tr>
<td>
Task 1.2 Requirements engineering for
OpenReq
</td>
<td>
Studies regarding the requirements of the OpenReq trials, mainly based on
on-site interviews.
**Resulting data:** Anonymized summaries of the interviews conducted in the
trials (confidential information from trial partners will be anonymized).
</td> </tr>
<tr>
<td>
Task 1.3 OpenReq platform architecture
</td>
<td>
No user studies and data collection are planned in this context.
</td> </tr>
<tr>
<td>
Task 1.4 Specification of technological and project standards
</td>
<td>
No user studies and data collection are planned in this context.
</td> </tr>
<tr>
<td>
Task 1.5 Project infrastructure for integration, testing, and deployment
</td>
<td>
No user studies and data collection are planned in this context.
</td> </tr>
<tr>
<td>
Task 1.6 OpenReq process & methodology
</td>
<td>
Studies regarding state of the art and open issues in recommender systems for
requirements engineering with a special focus on the impact of AI technologies
on requirements engineering processes.
**Resulting Data:** Anonymous industry feedback on the related state of the
art.
</td> </tr> </table>
## OpenReq data used in OpenReq knowledge base
Scientific publications representing outcomes of the mentioned studies and
beyond are stored in the OpenReq knowledge base and are publicly accessible.
The accessibility of deliverables (related to WP1) will follow the policy
defined in the OpenReq proposal.
Technical reports will be publicly accessible unless partners define a
different policy (which is the exception rather than the rule).
Source code will be made publicly accessible, always taking into account the
policy defined in the OpenReq proposal. This topic is still under discussion
by the impact committee. Further details can be found in deliverable D8.4.
**WP 2 Software Requirements Intelligence**
The following table describes the data collected for analysis purposes which
are used to develop the OpenReq platforms and algorithms in order to achieve
the goals of WP2. The usage of this data for the purposes of documenting
OpenReq results in the OpenReq knowledge base is described thereafter.
## Planned activities & resulting data
<table>
<tr>
<th>
**WP Task**
</th>
<th>
**Activity description**
</th> </tr>
<tr>
<td>
Task 2.1 Design analytics & requirements intelligence approach
</td>
<td>
No user studies and data collection are planned in this context. However, the
approach will be based on the activities presented in Task 2.2 and Task 2.3.
</td> </tr>
<tr>
<td>
Task 2.2 Collection and analysis of explicit feedback
</td>
<td>
The data gathered from social media websites (including metadata) necessary
for this task will be collected, anonymized, stored and analyzed in accordance
with the guidelines of the sources and following the recommendation of the
German Data Protection Law. When applicable (e.g., for the case of explicit
data from OpenReq trial partners bug tracking systems) we will follow the
partners’ policy regarding data management.
Text mining and natural language processing algorithms will be applied to a
specific sample of the overall population of publicly available user-generated
social media content (e.g., content referring to one or more of the OpenReq
trial partners in a given timeframe).
**Resulting Data:** The collection of user-generated social media content will
result in a dataset stored in a way that could eventually be shared following
the guidelines mentioned above.
The analysis will result in a set of anonymized user needs (with case-specific
metadata attached), and an anonymous characterization of the users. A minimal
illustration of such anonymized collection is given after this table.
</td> </tr>
<tr>
<td>
Task 2.3 Collection and analysis of implicit feedback (usage data)
</td>
<td>
_Ad hoc_ developed sensors will collect patterns of usage and context from
individual users. To that end, existing services (e.g., cloud services) will
be leveraged; therefore, the respective policies will be followed (for example
_https://developers.google.com/terms/_ or
_https://developer.apple.com/programs/terms/_ _apple_developer_agreement.pdf_
).
The data will be collected and analyzed following the recommendations offered
by the MUSES and FASTFIX ERC-FP7 projects, and by the European Commission
(http://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX:31995L0046).
**Resulting Data:** The collection of usage data will result in an anonymized
dataset which will be stored and eventually shared in accordance with the
guidelines presented above.
The analysis will result in an anonymized set of rules and patterns.
</td> </tr>
<tr>
<td>
Task 2.4 Analytics Backend
</td>
<td>
This task builds on the continuous collection of structured implicit and
explicit feedback from the components developed in T2.2 and T2.3.
We will use our design & requirements intelligence approach to learn from the
feedback data. In particular, text-mining algorithms will allow analysis of
natural language texts, such as text-based documents or user feedback. We will
generate sets of training data to test the algorithms selected in T2.1 with
regard to accuracy and performance.
We will therefore use data collected in the previous steps as well as data
from the trials (i.e. the Wind Tre trial), and we will inherit the privacy
policies of both.
</td> </tr>
<tr>
<td>
Task 2.5 Interactive visualization of requirements data
</td>
<td>
Stakeholders need interactive visualizations of descriptive and predictive
analytics data to inform insights about requirements decisions. We will
implement corresponding methods and algorithms.
We will therefore use data collected in the previous steps as well as data
from the trials (i.e. the Wind Tre trial), and we will inherit the privacy
policies of both. If the visualization involves personal data, we will
anonymize the data.
</td> </tr>
<tr>
<td>
Task 2.6 Integration & refinement of requirements intelligence components
</td>
<td>
As for the previous task
</td> </tr> </table>
## OpenReq data used in OpenReq knowledge base
Scientific publications representing outcomes of the mentioned user studies
and beyond are stored in the OpenReq knowledge base and are publicly
accessible through preprint services (such as arXiv) or directly on the
Knowledge base.
The dataset (or a sample thereof) will be made accessible when possible,
taking into account the policy defined by the partners in the OpenReq proposal
(see section _Data Management Strategies of OpenReq Partners_ of this
document).
The accessibility of deliverables (related to WP2) will follow the policy
defined in the OpenReq proposal.
Technical reports will be publicly accessible unless partners define a
different policy (which is the exception rather than the rule).
Source code will be made publicly accessible, always taking into account the
policy defined in the OpenReq proposal.
**WP 3 Personal Recommendations for Stakeholders**
The following table describes user studies and related data collected for
analysis purposes which are used to develop the OpenReq platforms and
algorithms and publish corresponding research results. This table could be
updated in the future. It also includes references to OpenReq trial partners
and a short description of trial partner data used in order to achieve the
goals of WP3. The usage of this data for the purposes of documenting OpenReq
results in the OpenReq knowledge base is described thereafter.
## Planned studies & resulting data
<table>
<tr>
<th>
**WP Task**
</th>
<th>
**Study description**
</th> </tr>
<tr>
<td>
Task 3.1 Design stakeholder recommendation approach
</td>
<td>
No user studies and data collection are planned in this context. The approach
will be based on the activities presented in Task 3.2, Task 3.3, Task 3.4,
Task 3.5 and Task 3.6.
</td> </tr>
<tr>
<td>
Task 3.2 Screening and recommendation of relevant requirements
</td>
<td>
Empirical studies on recommendation algorithms that support requirements
screening and reuse conducted within the scope of software-engineering courses
at TU Graz. Evaluation of the prediction quality of the recommendation
algorithms focusing on how well the algorithms predict relevant requirements
and related artefacts in the current project context.
**Resulting Data:** Information about requirements, their properties and the
performance of the algorithms in anonymized fashion.
</td> </tr>
<tr>
<td>
Task 3.3 Recommendation for improving requirements quality
</td>
<td>
Analysis of the current requirements quality assurance approach of the OpenReq
partners.
Analysis of the metadata of requirements provided by the OpenReq trial
partners.
Knowledge-base analysis of the requirements documents provided by the OpenReq
partners.
Evaluation of the quality of the improvements based on the recommendation
approach (e.g., surveys/interviews).
**Resulting data:** Information about
requirements quality improvements (requirements metadata, properties).
Anonymized data from the evaluation of the proposed requirements improvement
approach.
</td> </tr>
<tr>
<td>
Task 3.4 Predicting requirement properties
</td>
<td>
Social network analysis of networks provided by OpenReq trial partners (in
anonymized fashion).
Requirements documents analysis provided by the trial partners.
Evaluation of the prediction quality of the recommendation algorithms focusing
on the prediction of requirement properties.
Empirical studies on the impact of different recommendation strategies for
predicting requirement properties within the scope of the OpenReq trial
partners.
**Resulting Data:** Information about requirements, their properties and the
performance of the algorithms in anonymized fashion.
</td> </tr>
<tr>
<td>
Task 3.5 Identification and recommendation of relevant stakeholders
</td>
<td>
Social network analysis of networks provided by OpenReq trial partners (in
anonymized fashion).
Requirements documents analysis provided by the trial partners.
Evaluation of the prediction quality of the recommendation algorithms focusing
on the prediction of stakeholders that are assigned to requirements.
Empirical studies on the impact of different recommendation strategies for
predicting stakeholders assigned to requirements within the scope of the
OpenReq trial partners.
**Resulting Data:** Information about requirements, the stakeholders assigned
to them and the performance of the algorithms in anonymized fashion.
</td> </tr>
<tr>
<td>
Task 3.6 Context-aware recommendations for stakeholders
</td>
<td>
Evaluation of the context-observer component that takes into account
contextual information to decide when, what, and in which way recommendations
will be delivered.
Empirical studies on the impact of the context-observer component within the
scope of the OpenReq trial partners.
**Resulting Data:** Information about the performance and usability of the
context-observer component.
</td> </tr>
<tr>
<td>
Task 3.7 Integration & refinement of the OpenReq recommender engine
</td>
<td>
Usability Studies on different prototype versions of OpenReq services. These
studies will be conducted with students at UPC and the trial partners and will
be based on questionnaires and prototype evaluations.
**Resulting Data:** Anonymous user feedback on usability questionnaires such
as SUS (System Usability Scale); a brief illustration of SUS scoring follows
this table.
</td> </tr> </table>
## OpenReq data used in OpenReq knowledge base
The main assets of this work package (assets of the studies such as guides,
protocols and results summary; state-of-the-art results; documents explaining
the work in this work package; etc.) will be available in the OpenReq Tuleap
instance and the OpenReq knowledge base, which runs on the technical
infrastructure of HITEC.
The accessibility of deliverables (related to WP3) will follow the policy
defined in the OpenReq proposal.
Technical reports will be publicly accessible unless partners define a
different policy (which is the exception rather than the rule).
Source code will be made publicly accessible, always taking into account the
policy defined in the OpenReq proposal.
**WP4 Group Decision Support**
The following table describes user studies and related data collected for
analysis purposes which are used to develop the OpenReq platforms and
algorithms and publish corresponding research results. It also includes
references to OpenReq trial partners and a short description of trial partner
data used in order to achieve the goals of WP4. The usage of this data for the
purposes of documenting OpenReq results in the OpenReq knowledge base is
described thereafter.
## Planned studies & resulting data
<table>
<tr>
<th>
**WP Task**
</th>
<th>
**Study description**
</th> </tr>
<tr>
<td>
Usability Studies
</td>
<td>
These studies will be conducted with students at TU Graz and user communities
of the trial partners, within the scope of questionnaires and prototype
evaluations, and supported by anonymous communities and micro-worker
platforms.
**Resulting Data:** Anonymous user feedback on usability questionnaires such
as SUS (System Usability Scale).
</td> </tr>
<tr>
<td>
Task 4.1 Group decision biases
</td>
<td>
These studies will be conducted within the scope of large software-engineering
courses at TU Graz where students work in groups to develop software. Existing
decision biases in the context of OpenReq trials will be analyzed on the basis
of questionnaires and prototype evaluations.
**Resulting Data:** Information about software-engineering groups, their
properties and performance in anonymized fashion.
</td> </tr>
<tr>
<td>
Task 4.2 E-participation platforms and methodologies for RE group decisions
</td>
<td>
Analysis and studies (surveys, interviews) of existing E-Participation and
E-Democracy Platforms regarding usability and methodologies. Adjustment
methodologies for RE group decisions based on expert interviews.
**Resulting Data:** User Feedback on usability, acceptance of E-Participation
Platforms from surveys and interviews. If existing platforms are used, we
check and ensure that the data is treated according to the OpenReq policies.
</td> </tr>
<tr>
<td>
Task 4.3 Design Group Decision Support approach
</td>
<td>
Usability Studies (see above) related to the usability of the different user
interfaces for group decision support. Empirical studies related to the impact
of group decision support functionalities (for example:
evaluating the quality of decision outcomes,
user satisfaction, satisfaction with the group recommendation and
corresponding decision, etc.)
**Resulting Data:** Anonymous user feedback on usability questionnaires such
as SUS. Information about software-engineering groups, their properties and
performance in anonymized fashion, including feedback on the mentioned aspects
such as satisfaction with the recommendation, etc.
</td> </tr>
<tr>
<td>
Task 4.4 Consensus & decision quality in group decision making
</td>
<td>
Usability Studies (see above) related to the usability of different user
interfaces and visualization concepts for achieving consensus and increasing
the decision quality (for example: in terms of increasing information exchange
between group members / stakeholders, etc.). Empirical studies with anonymous
user communities related to the impact of different aggregation heuristics and
explanations on aspects such as user satisfaction, prediction quality and
decision quality.
**Resulting Data:** Anonymous user feedback
on usability questionnaires on user interfaces and visualization concepts.
Information about software-engineering groups, their properties and
performance in anonymized fashion including information about the predictive
quality of proposed aggregation heuristics.
</td> </tr>
<tr>
<td>
Task 4.5 Recommendation of stakeholders & scheduling of tasks
</td>
<td>
Empirical studies on the impact of different group configuration strategies on
group performance conducted within the scope of software-engineering courses
at TU Graz. Social network analysis of networks provided by OpenReq trial
partners (in anonymized fashion). Evaluation of the prediction quality of the
recommendation algorithms focusing on the identification of relevant
stakeholders.
**Resulting Data:** Information about stakeholders and project team membership
in anonymized fashion.
</td> </tr>
<tr>
<td>
Task 4.6 Integration & refinement of group decision support infrastructure
</td>
<td>
Usability Studies (see above) on different prototype versions of OpenReq
services.
**Resulting Data:** Anonymous user feedback on usability questionnaires such
as SUS.
</td> </tr> </table>
## OpenReq data used in OpenReq knowledge base
Scientific publications representing outcomes of the mentioned user studies
and beyond are stored in the OpenReq knowledge base and are publicly
accessible.
The accessibility of deliverables (related to WP4) will follow the policy
defined in the OpenReq proposal.
Technical reports will be publicly accessible unless partners define a
different policy (which is the exception rather than the rule).
Source code will be made publicly accessible, always taking into account the
policy defined in the OpenReq proposal.
**WP 5 Knowledge and Dependency Management**
The following table describes user studies and related data collected for
analysis purposes which are used to develop the OpenReq platforms and
algorithms and publish corresponding research results. It also includes
references to OpenReq trial partners and a short description of trial partner
data used in order to achieve the goals of WP5. The usage of this data for the
purposes of documenting OpenReq results in the OpenReq knowledge base is
described thereafter.
WP5 results created by UH are used in other WPs’ user studies that are
described in the respective WPs.
## Planned studies & resulting data
<table>
<tr>
<th>
**WP Task**
</th>
<th>
**Study description**
</th> </tr>
<tr>
<td>
Usability Studies
</td>
<td>
UH’s WP5 work does not include any direct studies with users or other
interaction with users. It is constructive research work in which exemplary
data will be utilized.
</td> </tr>
<tr>
<td>
Task 5.1. Design approach for requirements knowledge and patterns
</td>
<td>
UH will perform empirical studies regarding
the industrial state of the practice for representation of requirements and
their management. See also T1.2.
**Resulting Data:** Anonymous description of
the industrial state of the practice in
requirements. The data will include a technical approach, architecture and
algorithms for knowledge representation and dependency management components.
</td> </tr>
<tr>
<td>
Task 5.2. Design approach for automated dependency management
</td>
<td>
UH will perform empirical studies regarding state of the art and practice for
management and representation of requirements. See also T1.2.
**Resulting Data:** Anonymous state of the art description of the
requirements, technical approach, algorithms, and architecture of the
knowledge representation and dependency management components.
</td> </tr>
<tr>
<td>
Task 5.3. Dependency extraction from text based requirements
</td>
<td>
The task is based on data provided by other tasks and manages the same data.
In addition, existing requirements data from trial partners’ requirements
management systems is utilized but no user or personal data is included. In
addition, an evaluation of the quality of the dependency extraction algorithms
will be carried out.
**Resulting Data:** Information about requirements dependencies, and the
performance of the algorithms in anonymized fashion.
</td> </tr>
<tr>
<td>
Task 5.4. Development of OpenReq ontologies and patterns
</td>
<td>
No user studies and data collection are planned in this context. The
ontologies and patterns are partially based on and generalized from T1.2 and
trial data of WP7 about application domain. No personal data is included in
ontologies and patterns.
</td> </tr>
<tr>
<td>
Task 5.5. Dependency management, conflict detection and resolution
</td>
<td>
There are no direct studies with users. Trials apply tools provided by Task
5.5. to manage dependencies and conflicts. This may include processing data of
related, identifiable stakeholders. No sensitive personal information is
involved. See WP7 for data management of the trials.
**Resulting Data:**
Analyses including identification of potentially relevant stakeholders related
to
issues within trials. See WP7.
</td> </tr>
<tr>
<td>
Task 5.6. Integration and refinement of requirements knowledge and dependency
components
</td>
<td>
Technical testing of the integration is carried out, but the components do not
directly include any user interaction or personal data.
</td> </tr> </table>
## OpenReq data used in OpenReq knowledge base
Scientific publications representing outcomes of the mentioned user studies
and beyond are stored in the OpenReq knowledge base and are publicly
accessible.
The accessibility of deliverables (related to WP5) will follow the policy
defined in the OpenReq proposal.
Technical reports will be publicly accessible unless partners define a
different policy (which is the exception rather than the rule).
Source code will be made publicly accessible, always taking into account the
policy defined in the OpenReq proposal.
**WP 6 OpenReq Interfaces**
The following table describes user studies and related data collected for
analysis purposes which are used to develop the OpenReq platforms and
algorithms and publish corresponding research results. The usage of this data
for the purposes of documenting OpenReq results in the OpenReq knowledge base
is described thereafter.
## Planned studies & resulting data
<table>
<tr>
<th>
**WP Task**
</th>
<th>
**Study description**
</th> </tr>
<tr>
<td>
Task 6.1 Design OpenReq user interface approach
</td>
<td>
Evaluation of the OpenReq user interfaces by means of usability studies. These
studies will be conducted with students at TU Graz and user communities of the
trial partners, within the scope of questionnaires and prototype evaluations,
and supported by anonymous communities and micro-worker platforms.
**Resulting Data:** Anonymous user feedback on usability questionnaires such
as SUS.
</td> </tr>
<tr>
<td>
Task 6.2 Continuous integration
</td>
<td>
No user studies and data collection are planned in this context.
</td> </tr>
<tr>
<td>
Task 6.3 Development of the OpenReq integrated version and API
</td>
<td>
No user studies and data collection are planned in this context.
</td> </tr>
<tr>
<td>
Task 6.4 Integration of OpenReq in issue trackers & collaboration tools
</td>
<td>
No user studies and data collection are planned in this context.
</td> </tr>
<tr>
<td>
Task 6.5 OpenReq cloud platform and services
</td>
<td>
No user studies and data collection are planned in this context.
</td> </tr>
<tr>
<td>
Task 6.6 Integration of OpenReq in requirements tools
</td>
<td>
No user studies and data collection are planned in this context (usability
studies are covered in Task 7.3).
</td> </tr>
<tr>
<td>
Task 6.7 Open-Call technical supervision
</td>
<td>
No user studies and data collection are planned in this context.
</td> </tr> </table>
## OpenReq data used in OpenReq knowledge base
Scientific publications representing outcomes of the mentioned user studies
and beyond are stored in the OpenReq knowledge base and are publicly
accessible.
The accessibility of deliverables (related to WP6) will follow the policy
defined in the OpenReq proposal.
Technical reports will be publicly accessible unless partners define a
different policy (which is the exception rather than the rule).
Source code will be made publicly accessible, always taking into account the
policy defined in the OpenReq proposal.
Running prototypes hosted on Engineering premises and potentially containing
data from the knowledge base will not be publicly available. All such data
will be hosted only in the Engineering data center located in Europe, and will
not be accessible from outside, unless otherwise agreed with the consortium.
All data stored in the virtual machine used to host OpenReq cloud services
will be protected by Engineering corporate firewalls and will be backed up
with daily incremental backups and monthly full backups.
**WP 7 Trials & Evaluation**
The following table describes user studies and related data collected for
analysis purposes which are used to evaluate the OpenReq platforms and
algorithms and publish corresponding research results. It includes references
to OpenReq trial partners and a short description of trial partner data. The
usage of this data for the purposes of documenting OpenReq results in the
OpenReq knowledge base is described thereafter.
## Planned studies & resulting data
<table>
<tr>
<th>
**WP Task**
</th>
<th>
**Study description**
</th> </tr>
<tr>
<td>
Task 7.1 Continuous planning of trials & evaluations
</td>
<td>
No user studies and data collection are planned in this context.
</td> </tr>
<tr>
<td>
Task 7.2 Cross-Platform OSS trial: as-is analysis and execution
</td>
<td>
During the evaluation of the OpenReq services, data from the publicly
available issue trackers of the company will be collected, analyzed and
stored.
Usability studies with Qt employees will
involve anonymous surveys and questionnaires.
</td> </tr>
<tr>
<td>
Task 7.3 Transportation trial: as-is analysis and execution
</td>
<td>
During evaluation of OpenReq services, the results of user sessions at Siemens
sites will be collected in anonymized tables and metrics. Usability studies
with Siemens employees will involve anonymous questionnaires.
**Resulting Data:** Anonymous expert decisions during work sessions and
anonymous user feedback on usability questionnaires.
</td> </tr>
<tr>
<td>
Task 7.4 Telecom trial: as-is analysis, execution, and evaluation
</td>
<td>
Usability studies with WindTre employees will involve anonymous
questionnaires.
</td> </tr>
<tr>
<td>
Task 7.5 Open-Call results evaluation
</td>
<td>
The participants in the OpenCall will use the OpenReq platform and its
connectors. Moreover, they will be the subjects of surveys and interviews for
the purpose of evaluation.
This data will serve the purpose of measuring the accuracy and usefulness of
the recommendations given by the OpenReq platform.
The OpenReq sub-contractors involved in the open call are diverse, ranging
from individuals or small teams (e.g., in the case of hackathons) to
companies and OSS communities. The different needs for collecting and storing
data from these
segments will be taken into account.
**Resulting Data:** Information about the usage of the OpenReq platforms and
connectors.
Feedback from surveys and interviews.
</td> </tr>
<tr>
<td>
Task 7.6 User Studies
</td>
<td>
See WP2-6 for details
</td> </tr>
<tr>
<td>
Task 7.7 Summative evaluation of OpenReq platform
</td>
<td>
No user studies and data collection are planned in this context (just a
summary of tasks 7.2 - 7.5).
</td> </tr> </table>
## OpenReq data used in OpenReq knowledge base
Scientific publications representing outcomes of the mentioned user studies
and beyond are stored in the OpenReq knowledge base and are publicly
accessible.
The accessibility of deliverables (related to WP7) will follow the policy
defined in the OpenReq proposal.
Technical reports will be publicly accessible unless partners define a
different policy (which is the exception rather than the rule).
Source code will be made publicly accessible, always taking into account the
policy defined in the OpenReq proposal.
**WP 8 Communication, Dissemination & Exploitation**
WP8 does not include any studies with users or other interaction with users.
**WP 9 Project and Quality Management**
WP9 does not include any studies with users or other interaction with users.
# Conclusion
This document described how data is managed in the OpenReq project. We want to
point out once more that OpenReq does not manage highly sensitive data and
that all partners and WPs have individually committed to putting in place the
data protection measures described in this document.
If the development of the project requires it, this deliverable will be
updated.
# Data Summary
The main objective of PhasmaFOOD is to design and implement an autonomous,
multifunctional and programmable optical sensing device, integrated with
spectroscopy technologies for food hazard, microbial activity detection and
shelf-life estimation.
The project will configure a dedicated cloud platform for collecting data from
the PhasmaFOOD device and to perform in-depth analysis of the data obtained
from the project’s device. The cloud platform will enable correlation of the
device measurements with spectral analysis results collected from all
connected PhasmaFOOD smart sensing devices. This will provide the opportunity
for detection of trends, patterns and distribution of food contamination,
which can help prevent outbreaks and provide recommendations for improving
food safety at different stages of the farm-to-fork production chain.
The PhasmaFOOD cloud platform will also host a sensory and contextual database
which will be used for training the data analysis and machine learning models
deployed on the smart sensory device and as part of the PhasmaFOOD mobile
application (data analytics calibration). The data collected from the
miniaturized device will be forwarded to the cloud, where the spectroscopy
analysis will take place using a reference database.
upon laboratory measurements of the specific foods and qualities supported by
the PhasmaFOOD Use Cases:
* **Use case 1: Detection of mycotoxins in various grains and nuts:** Aflatoxins, a special type of mycotoxins, will be detected. A simple, convenient ultraviolet test makes it possible to detect the possible presence of aflatoxin.
* **Use case 2: Detection of early sign of spoilage and spoilage in fruits, vegetables, meat, fish** : Combined with estimation on product expiration date.
* **Use case 3: Detection of food fraud:** Adulteration of alcoholic beverages, oil, milk and meat.
In Section 1 and Section 3 we provide a summary and descriptions,
respectively, of the datasets that will be produced by the different
laboratory experiment set-ups used to collect reference measurements. These
dataset descriptions provide useful insights into meat-related laboratory
experiments and may be useful in other food security settings outside the
project.
Deliverable updates are planned for Months 18 and 36 to reflect the updates in
the data management framework of PhasmaFOOD.
# Datasets: Reference and Name
The PhasmaFOOD project has already identified, and is still further
identifying, a set of data sources that will stem from new and existing
laboratory experiments producing the reference measurements required by the
use case scenarios.
Following an iterative process, data sources are established, encapsulating
the project’s use case requirements. A refinement process is foreseen to
continuously take place throughout the lifetime of the project, as new
laboratory experiments will be deployed to collect additional reference
measurements.
## Naming Methodology
### Folder nomenclature
For each data source there is a folder containing the data source definition
and the data sample. Each folder has a specific name composed of different
parts/elements, containing information about:
* Use Case (UC1, UC2, UC3),
* Food & Beverage Type (FBT-<Name>),
* Contamination & Fraud Type (CFT-<Name>),
* Measurement Sensor (MS-<Name>),
* Laboratory (LO-<Organisation short name>).
Based on the above, a data source folder instance example would be:
UC1_FBT-Pork-Meat_CFT-SPOILAGE_MS-FTIR_LO-AU
### Datasets nomenclature
In each folder, the files containing the measurement datasets will follow a
similar nomenclature, adding the field information required for further
identification of the datasets. The specific name of each dataset is composed
of different parts, containing the following elements:
* Use Case (UC1, UC2, UC3),
* Food & Beverage Type (FBT-<Name>),
* Contamination & Fraud Type (CFT-<Name>), Measurement Sensor (MS-<Name>),
* Laboratory (LO-<Organisation short name>),
* Dataset ID (DID-<ID provided by LO>),
* Date Provided (DP-<YYYYMMDD>),
* Dataset File Extension (DFE-<filename extension>).
Based on the above, a dataset instance example would be:
UC1_FBT-Pork-Meat_CFT-SPOILAGE_MS-FTIR_LO-AU_DID-111_DP-20170122_DFE-XLSX
# Dataset Descriptions
## Generic Use Case Data Sources
For the implementation of the PhasmaFOOD use cases, spectral databases of
samples will be built up to hold chemical reference data, which will later be
used to benchmark and validate new samples. The general schema for data source
descriptions in PhasmaFOOD is provided below.
<table>
<tr>
<th>
Data Source name: Use-case 1, 2 and 3
</th> </tr>
<tr>
<td>
**Data source description**
</td> </tr>
<tr>
<td>
Data, used for use-cases 1, 2 and 3 (see D1.1), populating spectroscopic
databases:
1. UV-Vis data – spectroscopic data
2. NIR data – spectroscopic data
3. Image data – 3D spectroscopic data
Chemical reference data (e.g. nitrogen/moisture determination, GC, LC-MS) to
benchmark the samples in use-cases 1, 2, and 3.
</td> </tr>
<tr>
<td>
**Dataset entities**
</td>
<td>
Samples used in use-cases 1, 2 and 3 for spectral data base building (WP3) and
validation (WP6)
</td> </tr>
<tr>
<td>
**Dataset attributes**
</td>
<td>
Unknown, pilot equipment is not available so far
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Unknown so far, probably .csv or excel formats
</td> </tr>
<tr>
<td>
**Standard**
</td>
<td>
NA
</td> </tr>
<tr>
<td>
**Direct data URI**
</td>
<td>
Unknown so far
</td> </tr>
<tr>
<td>
**Data Size**
</td>
<td>
Unknown so far
</td> </tr>
<tr>
<td>
**Sample size**
</td>
<td>
Unknown so far
</td> </tr>
<tr>
<td>
**Data lifetime**
</td>
<td>
WP3 until M27; WP6 until M36
</td> </tr>
<tr>
<td>
**Availability**
</td>
<td>
Only available to consortium
</td> </tr>
<tr>
<td>
**Data collection frequency**
</td>
<td>
On demand, when pilot equipment is available
</td> </tr>
<tr>
<td>
**Data quality**
</td>
<td>
Unknown so far
</td> </tr>
<tr>
<td>
**Raw data sample**
</td> </tr>
<tr>
<td>
NA
</td> </tr>
<tr>
<td>
**Print screen (if possible)**
</td> </tr>
<tr>
<td>
(Print screen for a data sample, if not in text format; e.g. Excel Sheet) NA
</td> </tr> </table>
<table>
<tr>
<th>
**Field Descriptions (Description of the fields within the data, whenever
possible; e.g. JSON keys descriptions, Excel Sheet’s column descriptions,
etc.)**
</th> </tr>
<tr>
<td>
**Field Name**
</td>
<td>
**Field Description**
</td>
<td>
**Type of Data**
</td> </tr>
<tr>
<td>
Sample name
</td>
<td>
Sample name
</td>
<td>
text
</td> </tr>
<tr>
<td>
Reference value
</td>
<td>
Class1/Class2
</td>
<td>
category
</td> </tr>
<tr>
<td>
Wavenumber
</td>
<td>
Wavenumber
</td>
<td>
number
</td> </tr>
</table>
In the remaining sections, detailed instances of PhasmaFOOD laboratory
measurements are presented. These will form the basis of the PhasmaFOOD cloud
reference database and will be specifically considered during the
specification (WP1) and design (WP2) project activities.
_NOTE:_ The measuring methods of the PhasmaFOOD demonstrator will include
spectral data (from visible and near-infrared spectroscopy) and image data
(visible camera). The following data sets are examples from FTIR spectroscopy
and multispectral imaging (MSI). Although these exact measuring methods will
not be implemented into the PhasmaFOOD demonstrator, the data format of FTIR
spectra is representative of our visible and NIR spectra, and MSI is used to
demonstrate the data format of our visible images.
## Beef meat spoilage experiment with FTIR
<table>
<tr>
<th>
Data Source name: Beef meat spoilage experiment with FTIR
</th> </tr>
<tr>
<td>
**Data source description**
</td> </tr>
<tr>
<td>
Beef meat spoilage using FTIR spectroscopy. Aerobic storage at chill (0, 5 °C)
and abuse (10, 15, and 20 °C) temperatures. Prediction of the microbial load
on the surface of meat samples directly from FTIR spectral data.
</td> </tr>
<tr>
<td>
**Dataset entities**
</td>
<td>
Beef meat spoilage
</td> </tr>
<tr>
<td>
**Dataset attributes**
</td>
<td>
FTIR spectral data
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Comma Separated values text files: *.csv
</td> </tr>
<tr>
<td>
**Standard**
</td>
<td>
Csv
</td> </tr>
<tr>
<td>
**Direct data URI**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Data Size**
</td>
<td>
5 (storage temperatures) × 1–7 MB
</td> </tr>
<tr>
<td>
**Sample size**
</td>
<td>
1–7 MB for each storage temperature. Each csv file is about 70–100 KB.
</td> </tr>
<tr>
<td>
**Data lifetime**
</td>
<td>
Overall period of 350 h
</td> </tr>
<tr>
<td>
**Availability**
</td>
<td>
Upon Request
</td> </tr>
<tr>
<td>
**Data collection frequency**
</td>
<td>
Meat samples stored at 0 and 5 °C were analyzed every 24 h, whereas samples
stored at 10, 15, and 20 °C were analyzed every 8, 6, and 4 h, respectively.
</td> </tr> </table>
<table>
<tr>
<th>
**Data quality**
</th>
<th>
Complete and published: Argyri et al. (2010) 1 Rapid qualitative and
quantitative detection of beef fillets spoilage based on Fourier transform
infrared spectroscopy data and artificial neural networks Sensors and
Actuators B: Chemical 145, 146-154.
</th> </tr>
<tr>
<td>
**Raw data sample**
</td> </tr>
<tr>
<td>
TITLE
DATA TYPE INFRARED SPECTRUM
ORIGIN JASCO OWNER
DATE 13/11/09
TIME 15:15:51
SPECTROMETER/DATA SYSTEM
LOCALE 1033
RESOLUTION
DELTAX 0.964233
XUNITS 1/CM
YUNITS ABSORBANCE
FIRSTX 399.1927
LASTX 4000.6047
NPOINTS 3736
FIRSTY 5
MAXY 5
MINY -1.51932
XYDATA
399.1927 5
400.1569 5
401.1211 5
402.0854 5 403.0496 5
404.0138 5
404.9781 5
405.9423 1.17363
406.9065 0.86412
407.8708 0.715091
</td> </tr>
<tr>
<td>
**Print screen (if possible)**
</td> </tr>
<tr>
<td>
</td> </tr>
<tr>
<td>
**Field Descriptions (Description of the fields within the data, whenever
possible; e.g. JSON keys descriptions, Excel Sheet’s column descriptions,
etc.)**
</td> </tr>
<tr>
<td>
**Field Name**
</td>
<td>
**Field Description**
</td>
<td>
**Type of Data**
</td> </tr>
<tr>
<td>
XYDATA
</td>
<td>
Spectral data
</td>
<td>
Wavelengths (cm -1 )
</td> </tr>
<tr>
<td>
Second column (next to XYDATA)
</td>
<td>
Spectral data
</td>
<td>
Absorbance at each specific wavelength
</td> </tr>
</table>
## Pork meat spoilage experiment with FTIR
<table>
<tr>
<th>
Data Source name: Pork meat spoilage experiment with FTIR
</th> </tr>
<tr>
<td>
**Data source description**
</td> </tr>
<tr>
<td>
Spectral data from FTIR with minced pork meat spoilage during aerobic storage
of meat samples at different storage temperatures (0, 5, 10, and 15 °C).
Prediction of the microbial load on the surface of meat samples directly from
FTIR spectral data.
</td> </tr>
<tr>
<td>
**Dataset entities**
</td>
<td>
Pork minced meat spoilage
</td> </tr>
<tr>
<td>
**Dataset attributes**
</td>
<td>
FTIR spectral data
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Comma Separated values text files: *.csv
</td> </tr>
<tr>
<td>
**Standard**
</td>
<td>
csv
</td> </tr> </table>
<table>
<tr>
<th>
**Direct data URI**
</th>
<th>
N/A
</th> </tr>
<tr>
<td>
**Data Size**
</td>
<td>
4 (storage temperatures) × 1–7 MB.
</td> </tr>
<tr>
<td>
**Sample size**
</td>
<td>
1–7 MB for each storage temperature. Each csv file is about 70–100 KB.
</td> </tr>
<tr>
<td>
**Data lifetime**
</td>
<td>
Overall period of 350 h
</td> </tr>
<tr>
<td>
**Availability**
</td>
<td>
Upon Request
</td> </tr>
<tr>
<td>
**Data collection frequency**
</td>
<td>
Samples stored at 0 and 5 °C were analyzed approximately every 24 and 12 h,
respectively, whereas samples stored at 10 and 15 °C were analyzed every 6–7
h.
</td> </tr>
<tr>
<td>
**Data quality**
</td>
<td>
Complete and published: Papadopoulou et al. (2011) 2 Contribution of
Fourier transform infrared (FTIR) spectroscopy data on the quantitative
determination of minced pork meat spoilage. Food Research International 44,
3264.
</td> </tr>
<tr>
<td>
**Raw data sample**
</td> </tr>
<tr>
<td>
TITLE
DATA TYPE INFRARED SPECTRUM
ORIGIN JASCO
OWNER
DATE 13/11/09
TIME 15:15:51
SPECTROMETER/DATA SYSTEM
LOCALE 1033
RESOLUTION
DELTAX 0.964233
XUNITS 1/CM
YUNITS ABSORBANCE
FIRSTX 399.1927
LASTX 4000.6047
NPOINTS 3736
FIRSTY 5
MAXY 5
MINY -1.51932
XYDATA
399.1927 5
400.1569 5
401.1211 5
402.0854 5
403.0496 5 404.0138 5
404.9781 5
405.9423 1.17363
406.9065 0.86412
407.8708 0.715091
</td> </tr>
<tr>
<td>
**Print screen (if possible)**
</td> </tr>
<tr>
<td>
</td> </tr>
<tr>
<td>
**Field Descriptions (Description of the fields within the data, whenever
possible; e.g. JSON keys descriptions, Excel Sheet’s column descriptions,
etc.)**
</td> </tr>
<tr>
<td>
**Field Name**
</td>
<td>
**Field Description**
</td>
<td>
**Type of Data**
</td> </tr>
<tr>
<td>
XYDATA
</td>
<td>
Spectral data
</td>
<td>
Wavelengths (cm -1 )
</td> </tr>
<tr>
<td>
Second column (next to XYDATA)
</td>
<td>
Spectral data
</td>
<td>
Absorbance at each specific wavelength
</td> </tr>
</table>
## Beef meat spoilage experiment with MultiSpectral Imaging (MSI)
<table>
<tr>
<th>
Data Source name: Beef meat spoilage experiment with MultiSpectral Imaging
(MSI)
</th> </tr>
<tr>
<td>
**Data source description**
</td> </tr>
<tr>
<td>
Beef meat spoilage using Multispectral imaging. Aerobic storage at 2, 8 and 15
°C, sterile and naturally contaminated samples. Prediction and mapping of the
microbial load on the surface of meat samples directly from Multispectral
Imaging data.
</td> </tr>
<tr>
<td>
**Dataset entities**
</td>
<td>
Beef meat spoilage.
</td> </tr>
<tr>
<td>
**Dataset attributes**
</td>
<td>
Multispectral images. Each sample consists of 18 images captured at 18
different wavelengths.
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
*.hips files. Graphic classification file.
</td> </tr>
<tr>
<td>
**Standard**
</td>
<td>
Hips
</td> </tr>
<tr>
<td>
**Direct data URI**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Data Size**
</td>
<td>
(105 for sterile + 114 for naturally contaminated) x 100MB ~= 22 GB
</td> </tr>
<tr>
<td>
**Sample size**
</td>
<td>
~100MB for each sample
</td> </tr>
<tr>
<td>
**Data lifetime**
</td>
<td>
Overall period of 350 h
</td> </tr>
<tr>
<td>
**Availability**
</td>
<td>
Upon Request due to the large size
</td> </tr>
<tr>
<td>
**Data collection frequency**
</td>
<td>
Meat samples stored at 2 and 8 °C were analyzed every 24 h, whereas samples
stored at 15 °C were analyzed every 6 h.
</td> </tr>
<tr>
<td>
**Data quality**
</td>
<td>
Complete and published: Tsakanikas et al (2016) Exploiting multispectral
imaging for non-invasive contamination assessment and mapping of meat samples.
Talanta 161, 604.
_http://www.sciencedirect.com/science/article/pii/S0039914016306889_
</td> </tr>
<tr>
<td>
**Print screen (if possible)**
</td> </tr>
<tr>
<td>
</td> </tr> </table>
## Pork minced meat spoilage experiment with MultiSpectral Imaging (MSI)
<table>
<tr>
<th>
Data Source name: Pork minced meat spoilage experiment with MultiSpectral
Imaging (MSI)
</th> </tr>
<tr>
<td>
**Data source description**
</td> </tr>
<tr>
<td>
Pork minced meat spoilage using multispectral imaging. Aerobic and MAP
(modified atmosphere packaging) storage at 0, 5, 10, 15 and 20 °C. Prediction
and mapping of the microbial load on the surface of meat samples directly from
multispectral imaging data.
</td> </tr>
<tr>
<td>
**Dataset entities**
</td>
<td>
Pork minced meat spoilage.
</td> </tr>
<tr>
<td>
**Dataset attributes**
</td>
<td>
Multispectral images. Each sample consists of 18 images captured at 18
different wavelengths.
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
*.hips files. Graphic classification file.
</td> </tr>
<tr>
<td>
**Standard**
</td>
<td>
Hips
</td> </tr>
<tr>
<td>
**Direct data URI**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Data Size**
</td>
<td>
(160 for aerobic storage + 150 for MAP storage) x 100MB ~= 31 GB
</td> </tr>
<tr>
<td>
**Sample size**
</td>
<td>
~100MB for each sample.
</td> </tr>
<tr>
<td>
**Data lifetime**
</td>
<td>
Overall period of 350 h.
</td> </tr>
<tr>
<td>
**Availability**
</td>
<td>
Upon Request due to the large size.
</td> </tr>
<tr>
<td>
**Data collection frequency**
</td>
<td>
Meat samples stored at 0 and 5 °C were analyzed every 24 h, samples stored at
8 °C were analyzed every 8 h, whereas samples stored at 15 and 20 °C were
analyzed every 5 h.
</td> </tr>
<tr>
<td>
**Data quality**
</td>
<td>
Complete and published: Dissing et al. (2013) Using Multispectral Imaging for
Spoilage Detection of Pork Meat. Food Bioprocess Technol (2013) 6:2268- 2279
3
</td> </tr>
<tr>
<td>
**Print screen**
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td> </tr> </table>
## Adulteration experiment Beef & Pork meat with MultiSpectral Imaging (MSI)
<table>
<tr>
<th>
Data Source name: Adulteration experiment Beef & Pork meat with MultiSpectral
Imaging (MSI)
</th> </tr>
<tr>
<td>
**Data source description**
</td> </tr>
<tr>
<td>
Minced beef and pork meat were mixed in order to achieve nine different
proportions of adulteration plus two categories of pure pork and pure beef.
Detection of minced beef fraudulently substituted with pork, and vice versa,
using multispectral imaging.
</td> </tr>
<tr>
<td>
**Dataset entities**
</td>
<td>
Detection of minced beef fraudulently substituted with pork and vice versa
</td> </tr>
<tr>
<td>
**Dataset attributes**
</td>
<td>
Multispectral images. Each sample consists of 18 images captured at 18
different wavelengths.
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
*.hips files. Graphic classification file.
</td> </tr>
<tr>
<td>
**Standard**
</td>
<td>
Hips
</td> </tr>
<tr>
<td>
**Direct data URI**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Data Size**
</td>
<td>
220 meat samples in total from four independent experiments (55 samples per
experiment) x 100MB ~= 22 GB
</td> </tr>
<tr>
<td>
**Sample size**
</td>
<td>
~100MB for each sample
</td> </tr>
<tr>
<td>
**Data lifetime**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Availability**
</td>
<td>
Upon Request due to the large size
</td> </tr>
<tr>
<td>
**Data collection frequency**
</td>
<td>
On demand. Storage is not applicable here.
</td> </tr>
<tr>
<td>
**Data quality**
</td>
<td>
Complete and published: Ropodi et al. (2015) Multispectral image analysis
approach to detect adulteration of beef and pork in raw meats. Food Research
International 67, 12-18 4 .
</td> </tr>
<tr>
<td>
**Print screen (if possible)**
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td> </tr> </table>
## Adulteration experiment Beef & Horse meat with MultiSpectral Imaging (MSI)
<table>
<tr>
<th>
Data Source name: Adulteration experiment Beef & Horse meat with MultiSpectral
Imaging (MSI)
</th> </tr>
<tr>
<td>
**Data source description**
</td> </tr>
<tr>
<td>
Detection of minced beef adulteration with horsemeat, including during storage
under refrigerated conditions. For this purpose, multispectral images of 110
samples from three different batches of minced beef and horsemeat were
acquired at 18 wavelengths. Images were taken again after samples were stored
at 4 °C for 6, 24 and 48 h.
</td> </tr>
<tr>
<td>
**Dataset entities**
</td>
<td>
Detection of minced beef adulteration with horsemeat.
</td> </tr>
<tr>
<td>
**Dataset attributes**
</td>
<td>
Multispectral images. Each sample consists of 18 images captured at 18
different wavelengths.
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
*.hips files. Graphic classification file.
</td> </tr>
<tr>
<td>
**Standard**
</td>
<td>
Hips
</td> </tr>
<tr>
<td>
**Direct data URI**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Data Size**
</td>
<td>
(110 samples from three different batches of minced beef and horsemeat) x
100MB ~= 11 GB
</td> </tr>
<tr>
<td>
**Sample size**
</td>
<td>
~100MB for each sample
</td> </tr>
<tr>
<td>
**Data lifetime**
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Availability**
</td>
<td>
Upon Request due to the large size
</td> </tr>
<tr>
<td>
**Data collection frequency**
</td>
<td>
Sampling at 6, 24 and 48 h of storage at 4°C.
</td> </tr>
<tr>
<td>
**Data quality**
</td>
<td>
Complete and published: Ropodi et al. (2017) Multispectral imaging (MSI): A
promising method for the detection of minced beef adulteration with horsemeat.
Food Control 73, 57-63 5 .
</td> </tr>
<tr>
<td>
**Print screen (if possible)**
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td> </tr> </table>
# Standards and Metadata
The food sector is characterized by non-centralized sources of dynamic and
heterogeneous data, which increase the information available about, for
example, meat quality in a particular region, but typically decrease the
effectiveness of sharing that information across stakeholders. The
harmonization and standardization of data structures and data exchange
services are fundamental challenges both for the information society as a
whole and for food security applications.
The main focus of PhasmaFOOD is to provide the required data interoperability
and adaptability in a variety of food safety settings. PhasmaFOOD will seek to
establish liaisons within the food security value chain, with a particular
interest in standardizing the use of food related Open Data in the food
security sector and standardizing the types of food related analytics that can
be sought from big data platforms.
## Rapid alert system for food and feed (RASFF)
Commission Regulation (EU) No. 16/2011 specifies the framework for
implementing measures for the rapid alert system for food and feed (RASFF).
The RASFF notifications are generated from templates that provide guidelines
on how different fields in the notifications are used. In the table below we
provide a sample example of the RASFF fields that are closely related to the
PhasmaFOOD application scenarios. Detailed analysis is provided in the project
specification document D1.2 _“Functional and System Specifications”_ .
__RASFF Fields_ _
<table>
<tr>
<th>
**RASFF Fields**
</th>
<th>
**Explanation**
</th> </tr>
<tr>
<td>
**notification classification**
</td>
<td>
Classification of the notification according to the definitions given in
Regulation 16/2011 and to the guidance given in SOP 5.
</td> </tr>
<tr>
<td>
**information source**
</td>
<td>
Specific source of the information contained in the notification if this is
relevant to the understanding of the content of the notification, e.g. a food
control body in a third country or a consumer association.
</td> </tr>
<tr>
<td>
**risk decision**
</td>
<td>
Gives information about the evaluation of the risk:
* whether the risk is considered to be serious, not serious or undecided;
* motivation: why the risk was evaluated as serious (only to be added when the
evaluation as a serious risk is not straightforward).
</td> </tr>
<tr>
<td>
**product category**
</td>
<td>
Choose the product category from one of the two lists (alphabetical order) or
enter it into the other field if the category is not among the entries of the
lists or if there are more than one (for more than one product belonging to
different categories).
</td> </tr>
<tr>
<td>
**product name(s) (on label)**
</td>
<td>
Precise product name(s), characterising the product(s), without using any
commercial name; often the product name on the label that can be found on the
packaging.
</td> </tr>
<tr>
<td>
**product CN code**
</td>
<td>
Enter the Common Nomenclature code for the product concerned.
</td> </tr>
<tr>
<td>
**product aspect**
</td>
<td>
Here you should enter important characteristics of the product such as the
temperature at which it is kept but also e.g. the kind of packaging, etc.
</td> </tr>
<tr>
<td>
**sampling dates**
</td>
<td>
6 separate fields are provided for a maximum of 6 separate values to be
entered
</td> </tr>
<tr>
<td>
**sampling info**
</td>
<td>
Make a reference to a compulsory sampling methodology or inform about the
circumstances in which the sample was taken (esp. if the sample was taken from
an opened packaging of the product etc.).
</td> </tr>
<tr>
<td>
**sampling place**
</td>
<td>
Place where the samples were taken: use the list box provided or the field
other if the place is not among the list entries or to specify the name of the
operator.
</td> </tr>
<tr>
<td>
**Analytical method(s)**
</td>
<td>
If a specific analytical method was applied, e.g. one described in legislation
or in an EN or international standard, enter it here.
</td> </tr>
<tr>
<td>
**hazards identified**
</td>
<td>
Enter the hazards that were evaluated as non-compliant (according to
legislation or risk evaluation) as a result of the analysis or analyses.
</td> </tr> </table>
__RASFF List of values_ _
In RASFF when there is mention of an “open list”, it is a list of entities to
which new entities could be added. The table below outlines the main lists of
values used in RASFF.
<table>
<tr>
<th>
**RASFF Lists of Values**
</th>
<th>
**Explanation**
</th> </tr>
<tr>
<td>
**notification type**
</td>
<td>
Food, food contact material, feed
</td> </tr>
<tr>
<td>
**notification classification**
</td>
<td>
Alert notification, border rejection notification, information notification
for Attention, information notification for follow-up, news
</td> </tr>
<tr>
<td>
**product relation**
</td>
<td>
Additional lots, different variety, ingredient, processed product, raw
material
</td> </tr>
<tr>
<td>
**risk decision**
</td>
<td>
Serious, not serious, undecided
</td> </tr>
<tr>
<td>
**impact on**
</td>
<td>
Human health, animal health, environment
</td> </tr>
<tr>
<td>
**unit weight/volume**
</td>
<td>
Closed list of units for weight/volume: g, kg, l, ml
</td> </tr>
<tr>
<td>
**temperature**
</td>
<td>
Ambient, chilled, frozen
</td> </tr>
<tr>
<td>
**hazard**
</td>
<td>
Closed list, see annex with an extracted hazards list from RASFF Access
database (where the master data for hazards are kept)
</td> </tr> </table>
**durability date** Best before, sell-by, use-by
## World Health Organization (WHO) FOSCOLLAB
The World Health Organization (WHO), through its Department of Food Safety and
Zoonoses (FOS), initiated a project named FOSCOLLAB to improve the sharing of
food safety data and information in support of risk assessment and decision-
making in food safety. FOSCOLLAB is a platform, accessible over the internet,
that displays together within dashboards various data (quantitative and
qualitative) and information (e.g. expert advice) useful for food safety
professionals. FOSCOLLAB allows linkages between databases using four
criteria: food name, hazard name, country of origin and year of data
generation.
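A minimal sketch of such a four-criteria linkage, assuming hypothetical record fields and sample values (FOSCOLLAB's internal implementation is not public):

```python
# Illustrative only: links records from two food safety databases on the four
# FOSCOLLAB criteria (food name, hazard name, country of origin, year).

def link_key(record):
    """Build the four-criteria linkage key for a record."""
    return (record["food"], record["hazard"], record["country"], record["year"])

def link_databases(db_a, db_b):
    """Return pairs of records from db_a and db_b sharing the same key."""
    index = {}
    for rec in db_b:
        index.setdefault(link_key(rec), []).append(rec)
    return [(rec, match) for rec in db_a for match in index.get(link_key(rec), [])]

occurrence = [{"food": "milk", "hazard": "aflatoxin M1", "country": "XX",
               "year": 2016, "level_ppb": 0.04}]
advice = [{"food": "milk", "hazard": "aflatoxin M1", "country": "XX",
           "year": 2016, "source": "expert advice"}]
print(link_databases(occurrence, advice))
```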
<table>
<tr>
<th>
**FOSCOLLAB Element**
</th>
<th>
**Explanation**
</th> </tr>
<tr>
<td>
**Sample collection, prep and analysis**
</td>
<td>
Important context can be added to sampling information by also reporting the
sample size, including units, and the sample’s representativeness. Where the
user is interested in knowing the prevalence of an analyte, or in knowing that
an analyte is not present with an estimated level of confidence, information
about the representativeness of the sample will be very important. In some
cases this is not necessary, for example where the user of FOSCOLLAB is only
seeking an indication of the presence of an analyte.
</td> </tr>
<tr>
<td>
**Country of origin of the sample**
</td>
<td>
Country of origin is necessary for identifying the country where contamination
occurred.
</td> </tr>
<tr>
<td>
**Why sample was collected**
</td>
<td>
Outbreak investigation, recall verification, compliance, random
sampling/surveillance, monitoring, baseline studies…
</td> </tr>
<tr>
<td>
**Action Description**
</td>
<td>
Action taken based on the laboratory result, e.g. International Health
Regulation (IHR) risk assessment/notification.
</td> </tr>
<tr>
<td>
**Instrument name**
</td>
<td>
Analytical instrument used to identify the analyte, e.g. Whole-Genome
Sequencing (WGS) platforms, test kits, etc.
</td> </tr> </table>
## Interoperability
PhasmaFOOD will investigate data interoperability with the aforementioned
standard food safety data models and will also consider additional available
food data models.
Data interoperability will have to be considered at the layer of the
PhasmaFOOD Cloud Platform and in the specification of the platform APIs.
Adapting the existing data models will enable the PhasmaFOOD application to
operate in different environments (e.g. food security checks) and to exchange
information with existing systems, thus growing the application's potential.
# Data Access and Sharing
Due to the nature of the data involved, some of the results generated in each
project phase will be restricted to authorized users, while other results will
be publicly available. In line with our commitment, data access and sharing
activities will be rigorously implemented in compliance with the privacy and
data collection rules and regulations, as applied nationally and in the EU, as
well as with the H2020 rules. If end-user testing is performed, PhasmaFOOD
users will be required to pre-register and give their consent through the
system. They will then need to authenticate themselves against a user
database. If successful, the users will have roles associated with them. These
roles will determine the level of access that a user is given and what they
are permitted to do, as sketched below.
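A minimal sketch of such a role-to-permission check, assuming hypothetical role and permission names (the actual PhasmaFOOD roles are not yet specified):

```python
# Illustrative role-based access model: each role maps to a set of
# permissions; access is granted if any of the user's roles carries it.

ROLE_PERMISSIONS = {
    "consumer": {"view_results"},
    "food_expert": {"view_results", "view_raw_spectra"},
    "admin": {"view_results", "view_raw_spectra", "manage_users"},
}

def is_allowed(user_roles, permission):
    """Grant access if any of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

# A pre-registered, authenticated user holding only the "consumer" role:
print(is_allowed({"consumer"}, "view_results"))      # True
print(is_allowed({"consumer"}, "view_raw_spectra"))  # False
```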
As the raw data included in the data sources will be gathered from closed and
controlled laboratory experiments, the collected measurements are regarded as
highly commercially sensitive. Therefore, access to raw data can only take
place through the partners involved in performing the laboratory measurements.
For the models to function correctly, the data will have to be included in the
PhasmaFOOD cloud database. The results of the food analytics will be secured,
and all privacy concerns will be catered for during the design phase. In the
case of trend analytics, anonymization methods will be applied as part of the
built-in cloud platform features.
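As one example of the kind of anonymization method that could be built in, the sketch below replaces direct identifiers with salted one-way hashes and reports only per-category aggregates; the field names and salt handling are assumptions, not the platform's actual implementation:

```python
# Illustrative anonymization step for trend analytics: pseudonymize device
# identifiers and keep only aggregate values per food category.

import hashlib
from collections import defaultdict

SALT = b"project-secret-salt"  # hypothetical; would be stored securely

def pseudonym(identifier: str) -> str:
    """Salted one-way hash, truncated for readability."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()[:12]

def anonymized_trend(measurements):
    """Aggregate measurement values per food category, dropping identity."""
    totals = defaultdict(list)
    for m in measurements:
        totals[m["food_category"]].append(m["value"])
    return {cat: sum(v) / len(v) for cat, v in totals.items()}

data = [{"device": pseudonym("lab-device-01"),
         "food_category": "milk powder", "value": 0.7}]
print(anonymized_trend(data))  # {'milk powder': 0.7}
```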
Publications will be released and disseminated through the project
dissemination and exploitation channels to make external research and market
actors aware of the project and to provide appropriate access to the data.
Within the project, our conference papers and journal publications will be
Green Open Access and stored in an appropriate repository – such as OpenAIRE
(European Commission, 2015), the Registry of Research Data Repositories
(German Research Foundation, 2015) or Zenodo (CERN Data Centre, 2015).
Since data management is expected to mature during the course of the project,
an updated release of this document will follow in M18, where the repositories
for data storage will be specified and more detailed information on how these
data can be accessed by the wider research community will be provided.
# Archiving and Preservation
_Short Term_
All original raw data files and data source processing programs will be
versioned over time and maintained in a date-stamped file structure, with text
files documenting the background. As the data will be stored in the PhasmaFOOD
cloud repositories, the data will be automatically backed up on a standardized
schedule. These backups can be brought back online within a reasonable
timeframe, ensuring that there is no detrimental effect from data being lost
or corrupted.
_Long Term_
The project intends the high-quality final data products generated by
PhasmaFOOD to be available in the long term for use by the research community
as well as by industry peers. We will identify appropriate archiving
institutions that might serve for long-term data preservation.
# Data Management Plan Checklist
At the end of the project, we will work through the following checklist to
ensure that we meet the criteria for having successfully implemented an Open
Access Data Management Plan. By adhering to the items below, we are confident
that the project will provide open access to the appropriate data and
software, thereby enabling researchers to utilize the findings of this project
to further expand their knowledge and capabilities, as well as providing
industry with the necessary tools to advance their business and processes.
1. Discoverable:
1. Are the relevant data that are to be made available, our project publications or any Open software that has been produced or used in the project, easily discoverable and readily located?
2. Have we identified these by means of a standard identification mechanism?
2. Accessible:
1. Are the data and associated software in the project accessible, where appropriate, and what are the modes of access, scope for usage of this data and what are the licensing frameworks, if any, associated with this access (e.g. licensing framework for research and education, embargo periods, commercial exploitation, etc.)?
3. Useable beyond the original purpose for which it was collected:
1. Are the data and associated software, which are made available, useable by third parties even after the collection of the data?
2. Are the data safely stored in certified repositories for long term preservation and curation?
3. Are the data stored along with the minimum software, metadata and documentation to make them useful?
4. Interoperable to specific quality standards:
1. Are the data and associated software interoperable, allowing data exchange between researchers, institutions, organizations, countries, etc. (e.g. adhering to standards for data annotation, data exchange, compliant with available software applications, and allowing re-combinations with different datasets from different origins)?
# Conclusions
This deliverable has provided an overview of how the data collection and
sharing plan will be built during the course of the PhasmaFOOD project and
after the project has finished. This deliverable is regarded as a living
document which will be updated incrementally as the project progresses. This
version sets the overall framework that will form the basis for two additional
iterations in M18 and M36, towards the overall delivery of a comprehensive
document at the end of the project.
In this version of the deliverable, we outlined the descriptions of the
use-case-related datasets, which are still being collected as part of
controlled laboratory measurements. Standardization and interoperability
aspects have been introduced, as well as the sharing and access procedures for
the project data.
It should be noted that an accurate description of the datasets to be produced
(but also, in some cases, collected and processed) during the early months of
the project is challenging. It is therefore not feasible to exhaustively list
the datasets that will subsequently be used and/or produced, but rather to
highlight the datasets that the consortium agrees are the most relevant at
this stage of the project. For these reasons, regular updates of the Data
Management Plan are expected. More specifically, these updates will be made in
PhasmaFOOD as part of the mid-term and final project reviews, and at other
moments as decided by the PhasmaFOOD consortium. Updates to the DMP will
appear within the dissemination reports of the project.
The upcoming revisions of this deliverable will focus, among other things, on
a fuller presentation of the data collection from the different laboratory
experiments, a description of the PhasmaFOOD database characteristics, updates
to data access and sharing, and updates to the data interoperability
priorities. Data regarding hardware design and specification, as well as
software design and implementation (at cloud, mobile and embedded level), will
also be addressed.
1243_BDVe_732630.md
# Executive Summary
The report outlines the data management policy that will be used by the
Consortium with regard to all the datasets. During the negotiation phase,
taking into account the mission of BDVe - _to support the Big Data Value PPP
in realizing a vibrant data-driven EU economy, or in other words, to support
the implementation of the PPP to be a SUCCESS_ - it was agreed that the
reference to a Data Management Plan be removed _**[opt out]**_ ; a deliverable
has nevertheless been kept to ensure that the topic is addressed adequately if
needed.
This very short deliverable describes the monitoring process in place in BDVe
to assess a potential need.
# Introduction
The Data Management Plan aims at identifying the main research data, generated
or processed within the project that could be subject to possible
restrictions. As defined in the EC Guidelines on FAIR Data Management in H2020
[2]:
_Data Management Plans (DMPs) are a key element of good data management. A DMP
describes the data management life cycle for the data to be collected,
processed and/or generated by a Horizon 2020 project. As part of making
research data findable, accessible, interoperable and re-usable (FAIR), a DMP
should include information on:_
* _the handling of research data during and after the end of the project_
* _what data will be collected, processed and/or generated_
* _which methodology and standards will be applied_
* _whether data will be shared/made open access and_
* _how data will be curated and preserved (including after the end of the project)._
_A DMP is required for all projects participating in the extended ORD pilot,
unless they opt out of the ORD pilot. However, projects that opt out are still
encouraged to submit a DMP on a voluntary basis._
During the negotiation phase, taking into account the mission of BDVe - _to
support the Big Data Value PPP in realizing a vibrant data-driven EU economy,
or in other words, to support the implementation of the PPP to be a SUCCESS_ -
it was agreed that the reference to a Data Management Plan be removed
_**[opt out]**_ ; a deliverable has nevertheless been kept to ensure that the
topic is addressed adequately if needed. This is described on p. 126 of Grant
Agreement number 732630 — BDVe [1]. We underline that research data are
primarily intended as the data needed to validate the results presented in
scientific publications.
This very short deliverable describes the aims of BDVe and its communication
strategy, complemented by the monitoring process in place in BDVe to assess a
potential need to amend this document and eventually opt back in. In case of
**[opt in]** , the BDVe project will discuss with the Commission how to make
its best effort to implement the best practices explained in [3] and [4] with
respect to open access to research data and the associated cost.
# Availability of BDVe project results
As a communication and support action with the mission _to support the Big
Data Value PPP in realizing a vibrant data-driven EU economy_ , the added
value of the BDVe activities largely resides in the widest possible diffusion
of information and knowledge to the European Big Data ecosystem. BDVe aims to
raise awareness and promote the exploitation of European data, frameworks and
skills, notably:
* the exploitation of Open Data provided by European organisations (e.g. European Open Data Portal, ESA data, National and local government data sources),
* the knowledge about European Big Data solutions including commercial frameworks and products on the market, but also promising outcomes of research and innovation activities (including the promotion of technologies and solutions developed along the PPP lifecycle)
* the effort to work on the alignment and coordination between the different EU Member States and between them and the EU strategy,
* the promotion of lighthouse projects (large scale pilots) and innovation and experimentation ecosystems,
* the development of training programs aligned with Industry needs and the promotion of skills exchanges.
Consequently, sharing information is of the essence for BDVe's success. This
includes dissemination on the Internet and through people, workshops and
forums: 100% of the deliverables are public (except management reports).
# Monitoring process for potential [opt in]
Once a year, during a BDVe Consortium PCC meeting, the Consortium will check
whether any significant changes have arisen that may contradict the goals of
the Open Research Data Pilot and the [opt out] decision. These goals are to
improve and maximise access to and re-use of research data generated by
Horizon 2020 projects, while taking into account the need to balance openness
with the protection of scientific information, commercialisation and
Intellectual Property Rights (IPR), privacy concerns, security, and data
management and preservation questions.
The updates of this deliverable will be automatic (but not limited to) in the
event of:
* changes in the consortium policies,
* a grant amendment implying the management of new research data,
* changes in the consortium composition.
If needed, this deliverable will be amended.
# BDV Meeting held in Galway on May 17th, 2017
The Data Management Plan and the content of the related deliverable were put
on the agenda of the BDV meeting in Galway during the WP1 management update.
We discussed the data handled by each work package and task. The collaboration
platform used daily across work packages is the SAP JAM platform [5], which is
hosted in Europe and benefits from the security and compliance level of the
SAP Cloud. No BDVe-specific research data has been identified, as we assume
that every BDV PPP project is responsible for developing its own DMP, or
opting out, in conformance with its own grant agreement.
# Conclusion
The BDVe consortium agreed that the data currently managed by the project (no
research data has been identified) does not require opting back in.
1245_FLAIR_732968.md
# 1 Introduction
## 1.1 FLAIR Objectives
FLAIR (Flying ultra-broadband single-shot InfraRed sensor) is a Research and
Innovation Action (RIA) funded by the European Union’s H2020 programme under
the Photonics Key Enabling Technologies (KET) topic.
Its core objective is to develop a high-performance air sampling sensor based
on cutting-edge photonic technology capable of performing high-specificity and
high-sensitivity (ppbv) air quality sensing in large areas as a result of its
installation aboard an Unmanned Aerial Vehicle (UAV), also known as a drone.
Today, significant effort is devoted globally to improving air quality
through, e.g., land-use planning strategies, replacement of fossil fuels by
clean energy sources and lower levels of industrial emissions. In order to be
successful, these measures need to be accompanied by air quality monitoring at
large scale, to ensure compliance with air quality legislation but also to
provide information for political decision-making regarding air quality and
safety.
particularly challenging outside the dense urban network of air quality
monitoring stations. FLAIR addresses this challenge by mounting a high-
performance air sampling sensor based on photonic technology on an UAV for
pervasive and large area coverage high-specificity and high-sensitivity air
quality sensing. Operating in the two atmospheric windows of 2-5 µm and 8-12
µm wavelength, FLAIR can detect minute traces of molecules in complex gas
mixtures from their characteristic IR absorption fingerprints and provide real
time information to the operator of the drone. FLAIR can operate in remote or
dangerous areas and outside of established monitoring networks.
FLAIR applications include the monitoring of air around industrial
infrastructure, maritime and land based traffic, landfills and agriculture
facilities and the project contributes to a safer environment by providing
detailed air quality data around current facilities and locations of interest
or in the case of catastrophic events like wildfires, volcanic eruption or
chemical accidents.
The advantages of using UAVs for this application are essentially related to
the fact that these vehicles can rapidly access areas that are too dangerous
or too difficult to reach by humans. Moreover, due to the local sampling FLAIR
can provide data from inside optically dense clouds and plumes that are not
accessible by ground based laser remote sensing methods.
Photonics technology is a promising approach to the challenge of air quality
monitoring, as it can provide, in principle, accurate identification and
concentration measurements of specific species in complex environments.
Current solutions include several methods for air quality monitoring, among
which are mass spectrometry, electronic noses and optical detection. While
systems based on mass spectrometry are highly sensitive, they suffer from
complexity and high footprint. Electronic noses are cheap but suffer from low
accuracy.
Several systems based on light sources operating in the IR range such as
quantum cascade lasers, diode laser, optical parametric oscillators or
frequency combs have enabled highly sensitive and selective detection of
molecules. Such high performance tools, however, typically remain confined to
academic research laboratories due to their narrow spectral operating window
(covering only very few molecules), their operational complexity and their
prohibitively high cost. These are the technical challenges FLAIR is
addressing.
The FLAIR project will generate trace gas absorption spectra from which
information on the levels of pollutants can be derived through data
processing. The datasets generated throughout the project will be fundamental
to validate the sensor prototype itself, to benchmark its performance against
current standards and to assess the feasibility of using unmanned vehicles as
a new instrument for deploying air quality sensors and implementing mobile
dynamic measuring stations.
The FLAIR project participates in the Pilot on Open Research Data (ORD)
launched by the European Commission (EC) under H2020. The ORD pilot aims to
improve and maximize access to and re-use of research data generated by H2020
projects. As such, the development and use of a Data Management Plan (DMP) is
required for all projects participating in the ORD Pilot.
## 1.2 Purpose of the Data Management Plan
A Data Management Plan is a living document that describes the data management
life cycle for the data collected, processed and generated by a H2020 project.
It is considered a key element of good data management. The plan outlines how
data will be created, managed, shared and preserved throughout the project,
providing arguments for any restrictions that apply to any of these steps or
to any of the data.
The EC encourages all projects to follow and apply principles that will allow
all research data to be Findable, Accessible, Interoperable and Reusable (FAIR
principles).
The research data generated or created under the projects may include
statistics, results of experiments, measurements, observations resulting from
fieldwork, survey results, interview recordings and images. Open access to
research data is important to allow validation of the results presented by
project researchers and consortia in scientific publications.
As the project evolves and the research progresses, datasets will be created
and may be subject to changes or updates in terms of the types, formats and
origins of the data. Furthermore, the way the data is named or made accessible
may change according to consortium policy changes and/or identification of
potential for exploitation by project partners. Therefore, it is expected that
the DMP will not remain unaltered throughout the project’s lifespan. This
document constitutes the first version of the DMP and an official deliverable
of the project as defined in the Grant Agreement (GA) of FLAIR.
The FLAIR DMP will be updated at months 18 (April 2018) and 36 (December 2019)
in line with the mid-term and final reviews respectively, as recommended in
the EC Guidelines on FAIR Data Management in Horizon 2020.
The obligations to disseminate results (Article 29.1 of the GA), provide open
access to scientific publications (Article 29.2 of the GA) and open access to
research data (Article 29.3 of the GA) do not, in any way, change the
obligation of consortia to protect results, ensure confidentiality obligations
and the security obligations or the obligations to protect personal data, all
of which still apply. Consequently, consortia do not have to ensure open
access to specific parts of their research data if the achievement of the
action's main objective or the exploitation of results would be jeopardised.
Whenever this is the case, the DMP will identify said data and explain the
reasons for not giving access.
## 1.3 Approach
The FLAIR DMP has been created and developed following the Guidelines on FAIR
Data Management in Horizon 2020 published by the EC in 2016, as well as the
UK’s Digital Curation Centre (DCC) guide on How to Develop a Data Management
and Sharing Plan from 2011 and the European Research Council’s (ERC)
Guidelines on Implementation of Open Access to Scientific Publications and
Research Data from 2017.
## 1.4 Maintenance of the FLAIR DMP
The FLAIR DMP will be maintained throughout the project’s lifespan by the
coordinator with support from the other partners. This activity falls under
WP1 of the project (Project Management).
# 2 FLAIR data
Large amounts of data in different formats will be collected and generated
during the FLAIR project. These will consist mainly of physical parameters,
designs, blueprints for hardware, electronic circuit design, and technical
data. These data will be shared and exploited according to the policies
established under the Grant and Consortium Agreements of the project. All data
collected during the project will be placed in the official FLAIR repository,
where they will be available for authorized persons and will be properly
secured against theft.
In the context of the Pilot on ORD in H2020, all data will be made accessible
to the general public for verification and further re-use. However, it is
necessary to take into account the possibility that some of them will be part
of the Intellectual Property Rights (IPR) of individual partners and therefore
will be protected. The following sections describe the current picture of
datasets to be generated or used in the project and provide answers to the
questions of Annex 1 of the EC Guidelines on FAIR Data Management in Horizon
2020.
## 2.1 FLAIR laboratory dataset
### 2.1.1 Summary
#### 2.1.1.1 Purpose of the data collection/generation
The FLAIR laboratory dataset will comprise the set of measurements of trace
gas absorption spectra accomplished with the FLAIR sensor in a controlled
laboratory environment. The purpose of this dataset is to validate the FLAIR
sensor design and components, and to characterize its performance against
known quantities of the target trace gases.
#### 2.1.1.2 Relation to the objectives of the project
The FLAIR laboratory dataset is fundamental for achieving the project core
objectives. It will be through this dataset that the design of the FLAIR
sensor and the prototype of the sensor and its components will be evaluated
and characterized. The sensor’s performance will be benchmarked against
standard sets of parameters such as the HITRAN database and the PNNL spectral
library described in sections 3.1 and 3.2.
#### 2.1.1.3 Data types and formats
The data comprising the FLAIR laboratory dataset will consist of:
* Trace gas concentrations in ppbv (parts per billion volume).
* Gas absorption spectra in the 2-5 µm and 8-12 µm windows as a set of absorbance per wavelength measurements.
At the moment of writing, the consortium plans to generate the data above for
the following gases (according to project deliverable D2.1 – Requirements for
FLAIR Sensor System):
* CO₂ – Carbon Dioxide
* CH₄ – Methane
* CO – Carbon Monoxide
* O₃ – Ozone
* N₂O – Nitrous Oxide
* SO₂ – Sulfur Dioxide
* NH₃ – Ammonia
* HCl – Hydrogen Chloride
Comparison results between the results obtained and the standard values of
benchmarks may also be included in the dataset as absolute and relative
differences.
The consortium does not foresee the inclusion of the data processing algorithm
details in the dataset as this can be commercially exploited by the project
partners.
**2.1.1.4 Re-use of existing data**
No existing data will be re-used in the generation of the FLAIR laboratory
dataset.
#### 2.1.1.5 Data origin
The data for the FLAIR laboratory dataset will be generated at the
laboratories of partners of the FLAIR consortium directly by the researchers
involved in the project. The generation procedures will be described in detail
in the appropriate deliverables and associated with the dataset.
**2.1.1.6 Expected size of the data**
At the moment the expected size of the dataset is not known.
#### 2.1.1.7 Data utility
The FLAIR laboratory dataset will be useful to the entire consortium (namely
in helping accomplish the project objectives as outlined above), as well as to
other researchers wishing to build on the work of FLAIR and to potential
customers interested in new sensors based on the FLAIR technology.
### 2.1.2 FAIR
#### 2.1.2.1 Making data findable
The FLAIR laboratory dataset will be made discoverable through the association
of metadata to the dataset. At the moment of writing the type of metadata and
identification mechanism to be applied is not yet defined. The process or
standard to be used to create the metadata is not clear yet. However, the
consortium expects to associate the following metadata to the dataset:
* Date of measurement
* Gases measured
* Absorption windows
Files in the dataset will be clearly named, and their names will include the
date of measurement, the gas to which the file refers, and whether the file
refers to the absorption spectrum or the detected concentration. The following
is an example of such a name: 20180428-FLAIR-CO2AbsSpectrum.ext.
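Since the naming convention above is concrete (date, project tag, gas, content type), a minimal sketch that generates such names is given below; the file extension and the final convention remain assumptions until the consortium fixes them.

```python
# Illustrative generator for dataset filenames following the convention above.

from datetime import date

def flair_filename(measure_date: date, gas: str, content: str, ext: str = "csv") -> str:
    """Build a '<YYYYMMDD>-FLAIR-<gas><content>.<ext>' style filename."""
    return f"{measure_date:%Y%m%d}-FLAIR-{gas}{content}.{ext}"

print(flair_filename(date(2018, 4, 28), "CO2", "AbsSpectrum"))
# -> 20180428-FLAIR-CO2AbsSpectrum.csv
```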
All files in the dataset will allow the clear identification of the version.
This may be achieved through the addition of a version suffix to the filename
or by supporting versioning in the FLAIR repository. The chosen solution is
not defined at the moment. In case the consortium opts for versioning through
the filename, then the file itself will describe in its initial contents the
major changes to the previous version (e.g. different laboratory test run).
#### 2.1.2.2 Making data openly accessible
At the moment of writing, the consortium expects to make the entire FLAIR
laboratory dataset openly available. The consortium expects to make the
dataset available through the project’s repository (which is foreseen to
support versioning). The repository is maintained by the project coordinator
and access to it is authenticated. Access to the repository will be enabled
through a web interface that only allows download of the dataset (i.e. it will
not be possible to delete, upload, check-out or commit other files).
Registration to the repository will be required and will consist in providing
the name, entity and reason of interest or foreseen purpose of use for the
FLAIR dataset.
#### 2.1.2.3 Making data interoperable
FLAIR will provide the data in the FLAIR laboratory dataset in the standard
units for detected concentration and those used in absorption spectra. This
should be enough to ensure the interoperability of the data. Parts per billion
volume (of air) is a commonly accepted unit for measuring concentrations of
compounds in the air. The standard way of presenting an absorption spectrum is
to represent the absorbance (unitless) against the wavelength in µm or nm
depending on the spectrum window of interest.
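A minimal sketch of how a spectrum could be serialized in this standard representation follows; the CSV container and column names are assumptions, as the consortium has not yet fixed a file format.

```python
# Illustrative serialization of an absorption spectrum: unitless absorbance
# against wavelength in micrometres, one pair per row.

import csv

def write_spectrum(path, wavelengths_um, absorbances):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["wavelength_um", "absorbance"])
        writer.writerows(zip(wavelengths_um, absorbances))

# Synthetic sample values, for illustration only:
write_spectrum("20180428-FLAIR-CO2AbsSpectrum.csv",
               [4.20, 4.25, 4.30], [0.12, 0.87, 0.33])
```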
#### 2.1.2.4 Increasing data re-use
At the moment of writing the consortium has not yet addressed this issue.
However, it is expected that the data will be made available for re-use after
the conclusion of the project. It is not clear if licensing will be applied
nor if an embargo period will be needed.
### 2.1.3 Resources
At the moment, and based on the assumptions and plan above, the costs for
making the FLAIR laboratory dataset FAIR will be covered by the regular
testing activities of the project (WP5 and WP6) and by project management
(WP1, where the responsibility for data management lies). Data management
responsibility lies with the project coordinator through WP1. The consortium
has not analysed or estimated the costs and/or potential benefits of long term
preservation of the FLAIR laboratory dataset.
### 2.1.4 Security
The FLAIR laboratory dataset will not include sensitive data. The dataset will
be stored in the project repository which is hosted in a server of the project
coordinator’s IT infrastructure. The repository supports version control which
should be enough to ensure data recovery in case of accidental deletions. Data
back-ups will be done according to the internal IT policy of the project
coordinator as applicable to all other relevant digital data of the company.
Access to the data will only be possible through authenticated access to the
repository: one account per partner and one account per external individual
(i.e. not belonging to the consortium) requesting access to the data – see
2.1.2.2 above.
### 2.1.5 Ethical aspects
Not applicable.
## 2.2 FLAIR field test dataset
### 2.2.1 Summary
#### 2.2.1.1 Purpose of the data collection/generation
The FLAIR sensor system will be operated at the sensor test facility on the
roof of the suburban air quality monitoring station on the premises of EMPA in
Dübendorf, Switzerland. It will be run for several days next to the inlet of
high-precision reference instruments. These parallel measurements are used for
the determination of the sensitivity and the measurement uncertainties of the
FLAIR sensor system for the four target gas species (CO₂, CH₄, CO and O₃)
under real-world conditions.
#### 2.2.1.2 Relation to the objectives of the project
The FLAIR field test dataset is fundamental for achieving the project core
objectives. It will be through this dataset that the design of the FLAIR
sensor and the prototype of the sensor and its components will be evaluated
and characterized under real world conditions.
#### 2.2.1.3 Data types and formats
The data comprising the FLAIR field test dataset will consist of:
* Trace gas concentrations measured by the FLAIR instrument in ppbv (parts per billion volume) for CO₂, CH₄, CO and O₃
* Trace gas concentrations measured by the Dübendorf air quality monitoring station in ppbv (parts per billion volume) for CO₂, CH₄, CO and O₃
* Absolute difference between the measurements of the two previous bullets
* Relative difference between the measurements of the first two bullets
The consortium does not foresee the inclusion of the data processing algorithm
details in the dataset as this can be commercially exploited by the project
partners.
**2.2.1.4 Re-use of existing data**
No existing data will be re-used in the generation of the FLAIR field test
dataset.
#### 2.2.1.5 Data origin
The data for the FLAIR field test dataset will be generated at the Dübendorf
air quality monitoring stations in Switzerland by researchers directly
involved in the project. The measurement process will be described in detail
in the appropriate deliverable and associated with the dataset.
**2.2.1.6 Expected size of the data**
At the moment the expected size of the dataset is not known.
#### 2.2.1.7 Data utility
The FLAIR field test dataset will be useful to the entire consortium as well
as for other researchers wishing to evolve the work of FLAIR and potential
customers interested in new sensors based on the FLAIR technology.
### 2.2.2 FAIR
#### 2.2.2.1 Making data findable
The FLAIR field test dataset will be made discoverable through the association
of metadata to the dataset. At the moment the consortium expects to associate
the following metadata to the dataset:
* Date of measurement
* Gases measured
* Time of measurement
Files in the dataset will be clearly named and their names will include the
date of measurement and gas to which the file refers to.
Versioning may be achieved through the addition of a version suffix to the
filename or by supporting versioning in the FLAIR repository. The chosen
solution is not defined at the moment although it is already clear that the
repository will support versioning.
#### 2.2.2.2 Making data openly accessible
At the moment of writing, the consortium expects to make the entire FLAIR
laboratory dataset openly available. The consortium expects to make the
dataset available through the project’s repository.
Registration to the repository will be required and will consist in providing
the name, entity and reason of interest or foreseen purpose of use for the
FLAIR dataset.
#### 2.2.2.3 Making data interoperable
FLAIR will provide the data in the FLAIR field test dataset in the standard
units for detected concentration. This should be enough to ensure the
interoperability of the data. Differences will be provided in concentration
units (ppbv) for absolute differences or percentages for relative differences.
**2.2.2.4 Increasing data re-use**
At the moment of writing the consortium has not yet addressed this issue.
### 2.2.3 Resources
The costs of making and maintaining the dataset FAIR will be covered by the
regular testing activities of the project (WP5 and WP6). Data management
responsibility lies with the project coordinator through WP1.
The consortium has not analysed the costs and/or potential benefits of long
term preservation of this dataset.
### 2.2.4 Security
The FLAIR field test dataset will not include sensitive data. The dataset will
be stored in the project repository which is hosted in a server of the project
coordinator’s IT infrastructure. The repository supports version control. Data
back-ups will be done according to the internal IT policy of the project
coordinator. Access to the data will only be possible through authenticated
access to the repository.
### 2.2.5 Ethical aspects
Not applicable
## 2.3 FLAIR UAV dataset
### 2.3.1 Summary
#### 2.3.1.1 Purpose of the data collection/generation
The FLAIR UAV dataset will comprise the following:
* Set of measurements of trace gas absorption spectra and minimum detected concentrations performed by the FLAIR sensor-equipped UAV, with profiling in:
  * the vertical direction
  * the radial direction
* Measurements of concentration of gas obtained from the tall tower atmospheric research site in Beromünster (Switzerland)
* Number of particles measured near emission sources (e.g. the motorway in the area of the NABEL site Härkingen in Switzerland)
* Particle concentration measured near emission sources (e.g. the motorway in the area of the NABEL site Härkingen in Switzerland)
The purpose of this dataset is to prove the suitability of the airborne FLAIR
sensor system for vertical profiling of atmospheric trace gases and for
airborne atmospheric measurements in general. It will also help to interpret
the measured spatial variation of gases that are emitted by road traffic (or
influenced by road traffic emissions, like O₃) and will help to demonstrate
the usefulness of the FLAIR system for other applications.
#### 2.3.1.2 Relation to the objectives of the project
The FLAIR UAV dataset contributes directly to the achievement of the project
core objectives. The vertical mapping of CO₂, CH₄ and CO at the tall tower
atmospheric research site in Beromünster (Switzerland), where precise
measurements of these atmospheric trace gases are available at different
heights, will be compared to the vertical profiles of CO₂, CH₄ and CO obtained
by the UAV flying the FLAIR sensor, by averaging the measurements obtained at
different altitudes. This comparison will confirm or reject the suitability of
the airborne FLAIR sensor system for vertical profiling of atmospheric trace
gases and for airborne atmospheric measurements in general.
The particle sensor measurements will be compared with the concentration
measurements obtained from the FLAIR sensor at a different location and will
contribute to demonstrate the usefulness of the FLAIR system for applications
where toxic gases and particle emission occur simultaneously (such as in road
traffic).
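A minimal sketch of this comparison step, assuming UAV samples are averaged in fixed altitude bins before being set against the tower readings (the bin width and field names are assumptions, not the project's defined procedure):

```python
# Illustrative vertical-profile aggregation: average UAV concentration
# samples (ppbv) in altitude bins so they can be compared with tower
# measurements taken at fixed heights.

from statistics import mean

def vertical_profile(samples, bin_m=10):
    """Average UAV concentrations (ppbv) in altitude bins of bin_m metres."""
    bins = {}
    for s in samples:
        key = int(s["altitude_m"] // bin_m) * bin_m
        bins.setdefault(key, []).append(s["ppbv"])
    return {alt: mean(v) for alt, v in sorted(bins.items())}

# Synthetic sample values, for illustration only:
uav = [{"altitude_m": 12, "ppbv": 1900}, {"altitude_m": 14, "ppbv": 1910},
       {"altitude_m": 45, "ppbv": 1885}]
print(vertical_profile(uav))  # {10: 1905, 40: 1885}
```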
#### 2.3.1.3 Data types and formats
The data comprising the FLAIR UAV dataset will consist of:
* FLAIR sensor trace gas concentrations in ppbv (parts per billion volume):
  * against height in meters
  * against horizontal distance from the tower in meters
* FLAIR sensor gas absorption spectra in the 2-5 µm and 8-12 µm windows as a set of absorbance per wavelength measurements:
  * against height in meters
  * against horizontal distance from the tower in meters
* Beromünster tower trace gas concentrations in ppbv (parts per billion volume) against height in meters
* Total number of particles measured in the UAV at Härkingen (site to be confirmed) against height in meters and horizontal distance in meters
* Particle concentration measured in the UAV at Härkingen (site to be confirmed) in total particles per ccm against height in meters and horizontal distance in meters
At the moment of writing, the consortium plans to generate the data above for
the following gases:
* CO₂ – Carbon Dioxide
* CH₄ – Methane
* CO – Carbon Monoxide
The consortium does not foresee the inclusion of the data processing algorithm
details in the dataset as this can be commercially exploited by the project
partners.
**2.3.1.4 Re-use of existing data**
No existing data will be re-used in the generation of the FLAIR UAV dataset.
#### 2.3.1.5 Data origin
The data for the FLAIR UAV dataset will be generated at the locations of the
flights (to be confirmed later during the project) directly by the FLAIR
consortium and the operation of the UAV. The generation procedures will be
described in detail in the appropriate deliverables and associated with the
dataset.
**2.3.1.6 Expected size of the data**
At the moment the expected size of the dataset is not known.
#### 2.3.1.7 Data utility
The FLAIR UAV dataset will be useful to the entire consortium (namely in
helping accomplish the project objectives as outlined above), as well as to
other researchers wishing to build on the work of FLAIR and to potential
customers interested in new sensors based on the FLAIR technology.
### 2.3.2 FAIR
#### 2.3.2.1 Making data findable
The FLAIR UAV dataset will be made discoverable through the association of
metadata to the dataset. At the moment of writing the type of metadata and
identification mechanism to be applied is not yet defined. The process or
standard to be used to create the metadata is not clear yet. However, the
consortium expects to associate the following metadata to the dataset:
* Date of measurement
* Gases measured
* Absorption windows
* Time of measurement
* Location of measurement
Files in the dataset will be clearly named. All files in the dataset will
allow the clear identification of the version. This may be achieved through
the addition of a version suffix to the filename or by supporting versioning
in the FLAIR repository. The chosen solution is not defined at the moment.
#### 2.3.2.2 Making data openly accessible
At the moment of writing, the consortium expects to make the entire FLAIR UAV
dataset openly available. The consortium expects to make the dataset available
through the project’s repository (which is foreseen to support versioning).
The repository is maintained by the project coordinator and access to it is
authenticated. Access to the repository will be enabled through a web
interface that only allows download of the dataset (i.e. it will not be
possible to delete, upload, check-out or commit other files).
Registration to the repository will be required and will consist in providing
the name, entity and reason of interest or foreseen purpose of use for the
FLAIR UAV dataset.
#### 2.3.2.3 Making data interoperable
FLAIR will provide the data in the FLAIR UAV dataset in the standard units as
described above in section 2.3.1.3.
**2.3.2.4 Increasing data re-use**
At the moment of writing the consortium has not yet addressed this issue.
### 2.3.3 Resources
The costs of making and maintaining the FLAIR UAV dataset FAIR will be covered
by the regular testing activities of the project (WP5 and WP6). Data
management responsibility lies with the project coordinator through WP1.
The consortium has not analysed or estimated the costs and/or potential
benefits of long term preservation of the FLAIR UAV dataset.
### 2.3.4 Security
The FLAIR UAV dataset will not include sensitive data. The dataset will be
stored in the project repository which is hosted in a server of the project
coordinator’s IT infrastructure. The repository supports version control. Data
back-ups will be done according to the internal IT policy of the project
coordinator. Access to the data will only be possible through authenticated
access to the repository.
### 2.3.5 Ethical aspects
Not applicable
# 3 Other data
## 3.1 HITRAN database
### 3.1.1 Summary
The FLAIR consortium will collect data from the HITRAN (High resolution
TRANsmission molecular absorption) database that will be used to benchmark and
characterize the FLAIR laboratory performance. HITRAN is a compilation of
spectroscopic parameters that a variety of computer codes use to predict and
simulate the transmission and emission of light in the atmosphere. The goal of
HITRAN is to have a self-consistent set of parameters. The database is a long-
running project started by the Air Force Cambridge Research Laboratories
(AFCRL) in the late 1960s in response to the need for detailed knowledge of
the infrared properties of the atmosphere.
The initial HITRAN database included only the basic parameters necessary to
solve the Lambert-Beer law of transmission. In addition, the air-broadened
Lorentz width was included, as well as the unique quantum identifications of
the upper and lower states of each transition.
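As a brief reminder for the reader (this formulation is standard radiative-transfer background, not something defined by HITRAN in this document), the Lambert-Beer transmission for a single line along a homogeneous path can be written as

$$
T(\nu) = \frac{I(\nu)}{I_0(\nu)} = \exp\bigl(-S\,\phi(\nu-\nu_0)\,N\,L\bigr),
$$

where $S$ is the line intensity, $\phi(\nu-\nu_0)$ the normalized line-shape function centred at $\nu_0$ (e.g. a Lorentz profile with the air-broadened width mentioned above), $N$ the number density of the absorbing molecule and $L$ the optical path length.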
The parameters stored in HITRAN are a mixture of calculated and experimental.
HITRAN provides the sources for the key parameters within each transition
record whereby the user can determine from where the value came. The
experimental data that enter HITRAN often come from the results of analysis of
Fourier transform spectrometer laboratory experiments. Many other experimental
data also are inputted, including lab results from tuneable-diode lasers,
cavity-ring down spectroscopy, heterodyne lasers, etc. The results usually go
through elaborate fitting procedures. The theoretical inputs include standard
solutions of Hamiltonians, ab initio calculations, and semi-empirical fits.
The HITRAN parameters will be compared to the data obtained from the FLAIR
sensor in a lab environment. This way, the consortium will be able to
characterize the performance of the FLAIR sensor and this will help establish
the validity of the FLAIR design and the prototype.
HITRAN parameters can be obtained directly from the HITRAN database website (
_http://hitran.org_ ) and are output in records of 160 characters which
include parameters in integer, real and text data types. The data types and
formats can be consulted at _https://www.cfa.harvard.edu/hitran/formats.html_
.
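As an illustration of how such fixed-width records can be consumed, the sketch below reads a few leading fields of a HITRAN 2004-format line record; the field offsets are quoted from memory of the published format and should be verified against the formats page linked above before use.

```python
# Hedged sketch: extract a few leading fields from one 160-character HITRAN
# line record. Offsets follow the HITRAN 2004 fixed-width format as far as
# recalled here; verify against the official format description.

def parse_hitran_record(line: str) -> dict:
    return {
        "molecule_id": int(line[0:2]),        # HITRAN molecule number (e.g. 2 = CO2)
        "isotopologue": int(line[2:3]),       # isotopologue index
        "wavenumber_cm1": float(line[3:15]),  # transition wavenumber (cm^-1)
        "intensity": float(line[15:25]),      # line intensity
    }

# Usage (file name hypothetical):
#   with open("02_hit.par") as f:
#       first = parse_hitran_record(f.readline())
```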
At the moment of writing the size of the data to be used by FLAIR is not
known. FLAIR will not store HITRAN data nor will it make the data available as
it can be freely obtained directly from HITRAN. It is mentioned here as it
will have an important role in the validation of FLAIR results.
Whenever HITRAN data is used the research results will clearly identify which
parameters have been used.
## 3.2 PNNL spectral library
### 3.2.1 Summary
The Pacific Northwest National Laboratory (PNNL) has created a quantitative
database containing the vapour-phase infrared spectra of pure chemicals. The
digital database has been created with both laboratory and remote-sensing
applications in mind.
This unique library is the "gold standard" of vapour phase infrared reference
spectra and is uniquely adapted to both remote and point sensing. The library
is a unique asset to the development of infrared sensors and methods. Data
from the PNNL spectral library will be used in FLAIR as benchmark against
which the FLAIR sensor’s performance will be measured. This way, the
consortium will be able to characterize the performance of the FLAIR sensor
and this will help establish the validity of the FLAIR design and the
prototype.
PNNL parameters can be obtained directly from the PNNL library website
( _https://secure2.pnl.gov/nsd/nsd.nsf/Welcome_ ) via registration.
At the moment of writing the size of the data to be used by FLAIR is not
known. FLAIR will not make PNNL data available as it can be freely obtained
directly from the PNNL library. Whenever PNNL data is used the research
results will clearly identify which parameters have been used.
## 3.3 UAV performance data
### 3.3.1 Summary
The FLAIR sensor will be installed on a modified Unmanned Aerial System based
on an existing product from TEKEVER Autonomous Systems. While the consortium
will gather and collect performance data for the FLAIR-equipped UAV for the
purpose of characterizing the performance of the system while in flight and to
validate the sensor application, specific UAV performance data will not be
made available outside the consortium.
Specific UAV sub-systems data is proprietary and can be used to infer the
solutions and specific components used in the product. As the UAV is a
commercial product, open access to this data could result in the loss of
competitive advantage in the commercial exploitation of the product. Any data
which is considered relevant for the characterization and validation of the
FLAIR sensor itself while installed in the UAV (e.g. altitude, GPS position,
etc.) will be made available through the FLAIR UAV dataset.
## 3.4 FLAIR system design data
### 3.4.1 Summary
The FLAIR sensor system is described in detail in deliverable D2.3 – List of
specifications for all system (UAV and assembled sensor) and sub-system
parameters. Designs, blueprints for hardware, electronic circuit design and
technical data for the sub-systems will be generated throughout the project
and will be made available in corresponding deliverables, most of which are of
public dissemination. However, it must be noted that the consortium may
decide, as the project progresses that some specific design details be kept
confidential as they provide the added value or the competitive advantage that
will allow the generating partner(s) to protect and exploit the results.
Examples of these include design and test data related to:
* Data processing algorithm due to its commercial potential;
* Subsystem development;
* UAV adaptation;
* Sensor/UAV fitting.
Furthermore, while all of the physical components of the FLAIR sensor are
public, open access to the HW and prototypes will not be ensured by the
consortium. Rather, a description of the units will be published but the unit
itself will not be public as it is needed in the FLAIR drone system.
1246_PAPA-ARTIS_733203.md
# General
In the PAPA-ARTiS trial, the Clinical Trial Centre (CTC-L) at the University
of Leipzig will be responsible, on behalf of the legal trial sponsor
University of Leipzig, for implementation of procedures for data collection,
storage, protection, retention and destruction. The CTC-L has implemented a
data safety and security concept according to the requirements of the German
Federal Office for Information Security ( _www.bsi.bund.de_ ) . All PAPA-
ARTiS-related procedures will be developed in cooperation with the data
security engineer of the CTC-L and have to be approved by the official data
protection officer of the University of Leipzig prior to implementation.
Chapter 14 of the trial protocol lists all aspects of data collection, storage
and protection.
# Data Collection
Two types of data will be collected in the PAPA-ARTiS trial: a) _clinical
data_ and
b) _imaging data_
Investigators in the recruiting trial centers will initially collect all data.
Together with information on the trial, eligible patients will be informed
about data capture, transmission and analysis processes. Once a patient is
eligible, and has given his/her informed consent to trial participation and
data collection, the investigator will assign the patient a unique patient
identification code. Patient identification code lists will be generated in
advance by the CTC-L and forwarded to the recruiting centers. These lists are
part of the investigator site file and remain at the recruiting site.
Furthermore, these lists are the only documents that allow for
reidentification of the patients.
The CTC-L will design CRFs and develop validated eCRFs (electronic case report
forms) for data capture directly at the trial sites. Additionally, CTC-L is
responsible for the eCRF training to staff of all sites and all monitors. The
investigators (or their designated staff) will enter all clinical data into
these eCRFs. Patient data will be recorded in pseudonymised form (i.e. without
reference to the patient’s name) using exclusively the patient’s
identification code.
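For illustration only, the sketch below shows how such a pre-generated code list per recruiting centre might look; the actual code structure used by the CTC-L is not specified in this document, so the format is hypothetical.

```python
# Illustrative generation of a patient identification code list for one
# recruiting centre (centre number plus sequential patient number).

def generate_code_list(centre_no: int, n_patients: int) -> list[str]:
    """e.g. centre 3 -> ['PAPA-03-001', 'PAPA-03-002', ...] (format assumed)."""
    return [f"PAPA-{centre_no:02d}-{i:03d}" for i in range(1, n_patients + 1)]

print(generate_code_list(3, 5))
```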
# Data Storage
The EDC tool secuTrial® by InterActiveSystems GmbH will be used for database
programming. secuTrial® uses an underlying Oracle database. The study database
will be developed and validated according to the Standard Operating Procedures
(SOPs) of the CTC-L prior to data capture. All information entered into the
eCRF by the investigator or an authorized member of the local study team will
be systematically checked for completeness, consistency and plausibility by
routines implemented in the database, running every night. Data management
staff of the CTC-L will check error messages generated by these routines. In
addition, the query management tool of secuTrial® will show discrepancies,
errors or omissions to the investigator/data entry staff based on the pre-
programmed checks. The CTC-L will supervise and aid in the resolution of
queries, should the site have questions. Corrected data will be re-checked by
automatic routines during the night after entry.
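The sketch below gives a flavour of such an automated completeness/plausibility check; the real checks are pre-programmed in the validated secuTrial® study database, and the field name and limits here are invented purely for illustration.

```python
# Conceptual sketch of a completeness/plausibility check on one eCRF record.
# The field "systolic_bp" and its plausibility range are hypothetical.

def check_record(record: dict) -> list[str]:
    """Return human-readable error messages for one eCRF record."""
    errors = []
    if record.get("systolic_bp") is None:
        errors.append("missing value: systolic_bp")
    elif not 60 <= record["systolic_bp"] <= 260:
        errors.append(f"implausible systolic_bp: {record['systolic_bp']}")
    return errors

print(check_record({"systolic_bp": 300}))  # ['implausible systolic_bp: 300']
```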
If a query cannot be resolved, the Data Management staff of the CTC-L will ask
the coordinating investigator and/or the biometrician, if they may close the
query (e.g. if it is clear that this data cannot/ was not collected).
# Data Protection
## Access, Changes and Monitoring
During the whole course of the study, all data undergo a daily backup. An
access concept for the trial database will be implemented, based on a strict
hierarchy and role model. Thus, data access is limited to authorized persons
only, and unauthorized access to pseudonymised patient data is prevented. Any
change of data (e.g. error correction during query management) is recorded
automatically via an audit trail within the database. At the end of the study,
once the database has been declared complete and accurate, the database will
be locked. Thereafter, any changes to the database are possible only by joint
written agreement between coordinating investigator, biometrician and data
manager.
Clinical monitors appointed by the CTC-L (for Germany) or ECRIN (for all other
involved countries) will regularly visit the recruiting centres and verify the
informed consent forms. Only after monitors confirmed that a patient has
unambiguously given his or her consent for trial participation as well as for
data capture, transmission and analysis, the data will be used for analyses.
## Pseudonymisation
Local investigators will be trained by the CTC-L prior to study start on the
pertinent procedures for pseudonymisation. There is no risk for
pseudonymisation failure as far as the eCRFs are concerned, because no
identifying data will be entered in the eCRF. However, pseudonymisation
failures may arise with the imaging data. It may happen that investigators
upload images labelled with the patient’s name. A SOP will be developed by the
CTC-L together with the imaging reference centres on how to deal with this
situation (e.g. ask the responsible investigator to delete the non-
pseudonymised record and upload a pseudonymised record instead, retrain
investigators at the site concerned, inform the trial sponsor on the problem).
Additionally, human cells/tissue will be collected in a sub-group of patients
for a scientific subproject. Labelling of the samples will be exclusively with
the trial identification number of the trial participant. Samples will be
processed, stored and analyzed only by using the trial identification number.
Any scientific research making use of the data beyond what is spelled out in
the protocol of the clinical trial will be conducted in accordance with the
applicable law on data protection and the patient will be asked explicitly to
provide consent on participation in the scientific projects and pseudonymised
storage and use of his/her samples.
Since in the course of the trial contact between the trial centre and the
patients might be necessary, the patients’ full name, address and telephone
number will be ascertained and stored at the treating trial site after
obtaining written permission to do so. This information will be stored
separately from the trial data.
## Withdrawal of Consent
Patients may withdraw their consent to participate at any time without giving
reasons. Nevertheless, the patient should be asked for the reason of the
premature termination after being informed that he/she does not need to do so.
Information as to when and why a patient was registered/randomized and when
he/she withdrew consent must be retained in the documentation.
In the event of withdrawal of consent, the necessity for storing data and/or
samples will be evaluated. While Regulation (EC) No 45/2001 of the European
Parliament and of the Council [1] strengthens personal data protection rights,
encompassing the right to access, rectification and withdrawal of data, it
also specifies the situations in which restrictions on those rights may be
imposed. The withdrawal of informed consent should not affect the results of
activities already carried out, such as the storage and use of data obtained
on the basis of informed consent given beforehand. Data no longer needed will
be deleted as requested, with full documentation of the reasons for deletion.
Similarly, samples will be discarded as wished.
## Data Exchange
An “ownCloud” service will be used as a file-sharing platform in the PAPA-
ARTiS trial. Hosting and maintenance of the “ownCloud” service takes place at
the University of Leipzig, behind the firewall of the institution.
The “ownCloud” file-hosting system will be used for exchange of central trial
documents as well as for imaging data for reference evaluation of the CT/MRIs.
Access to the “ownCloud” service follows the same hierarchical role concept as
the trial database. Data will be uploaded without personal information, using
exclusively the trial identification number. Trial centres will only be able
to upload data and see data concerning their own patients, while the reference
organisation may exclusively download data essential for its evaluation.
Using an eCRF as well as the “ownCloud” file-hosting system, both located on
servers of the CTC-L and thus behind the firewall of the University of
Leipzig, reduces the risk of unauthorized or unlawful access, disclosure,
dissemination, alteration, destruction or accidental loss in comparison to
data transmission over a network. Access to the servers is secured via the
HTTPS protocol and requires a user-specific login and password.
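The hierarchical role concept described above could be modelled as a simple permission check; this is a sketch under the stated rules, with hypothetical role names, not the ownCloud configuration itself:

```python
def is_allowed(role, action, user_centre=None, file_centre=None):
    """Toy model of the hierarchical role concept for the file-sharing platform."""
    if role == "trial_centre":
        # Trial centres may upload, and may only see their own patients' files
        return action == "upload" or (action == "read" and user_centre == file_centre)
    if role == "reference_centre":
        # The reference organisation may exclusively download data for its evaluation
        return action == "download"
    return False

assert is_allowed("trial_centre", "read", user_centre="Leipzig", file_centre="Leipzig")
assert not is_allowed("reference_centre", "upload")
```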
## Transfer of Personal Data
The coordinating investigator certifies herewith that the transfer of
pseudonymized personal data will take place according to the documentation and
communication regulations of the GCP-guidelines (E6; 2.11) [2]. Moreover, the
coordinating investigator certifies that trial participants who do not permit
the transfer of data will not be admitted to the trial.
# Data Retention and Destruction
After the study aim has been reached and all concomitant scientific projects
have been completed, personal data will be stored in an anonymised manner for
30 years. The data sets made available to the scientific community (for
dissemination purposes) will be raw and anonymised and will be released one
year after the completion of the trial, once registration is completed.
# Adherence to National and EU Legislation
We hereby confirm that all clinical trial information will be recorded,
processed, handled, and stored by CTC-L on behalf of the coordinating
investigator in such a way that it can be accurately reported, interpreted and
verified while the confidentiality of records and the personal data of the
subjects remain protected. This is done in accordance with the applicable law
on personal data protection, i.e. Directive 95/46/EC [3] and Regulation (EC)
No 45/2001 of the European Parliament and of the Council [1]. The same applies
to the reference centres and the investigators at the trial sites.
The clinical site contracts will also ensure that the clinical sites comply
with national data protection laws.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
1249_One-Flow_737266.md
|
**Keywords**
Keywords must be provided when uploading the dataset in 4TU.Centre for
Research Data.
# _Metadata_
The platforms with overviews of metadata schemas that are typically
recommended by funders (e.g., _Research Data Alliance_, _Digital Curation
Centre_ and _DataCite_) do not provide schemas that are suitable for use in
this project. Therefore, to make the data interchangeable between the project
partners and future users of the data, we will develop our own schematic
overview of the metadata that will be collected and described.
As stated above, the discovery metadata are governed by the DataCite metadata
schema of 4TU.Centre for Research Data.
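As an illustration of what a discovery metadata record along the DataCite schema's mandatory properties could look like, consider the following sketch; all field values are hypothetical examples, not an actual ONE-FLOW record.

```python
import json

# Mandatory DataCite properties: Identifier, Creator, Title, Publisher,
# PublicationYear, ResourceType; "subjects" carries the required keywords.
record = {
    "identifier": {"identifier": "10.4121/uuid-0000-example", "identifierType": "DOI"},
    "creators": [{"creatorName": "Example, Researcher", "affiliation": "TU/e"}],
    "titles": [{"title": "ONE-FLOW WP2 cascade reaction dataset (example)"}],
    "publisher": "4TU.Centre for Research Data",
    "publicationYear": "2019",
    "resourceType": {"resourceTypeGeneral": "Dataset"},
    "subjects": [{"subject": "flow chemistry"}, {"subject": "catalytic cascade"}],
}

print(json.dumps(record, indent=2))
```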
## 2.2 Making data openly accessible
Relevant data that substantiate findings in peer‐reviewed scholarly
publications produced by the ONEFLOW project will be made available open
access (and as a default, without embargo) via the data repository 4TU.Centre
for Research Data. Data in 4TU.Centre for Research Data are archived/preserved
for the long term with a minimum of 15 years.
# _Metadata/Documentation/Provenance_
As part of the dataset deposited in 4TU.Centre for Research Data, a data
(provenance) guide will be deposited with data specific information
(parameters and/or variables used, column headings, codes/symbols used, etc.)
and with information on the provenance of the data. When software is needed to
reuse the data, the code will also be deposited together with the data itself.
# _Access restrictions_
The project will generate research data at a wide range of levels of detail,
from simulation and lab results to demonstrator validation. Data associated
with results that may have potential for commercial or industrial protection
generally cannot be made available open access, due to intellectual property
protection restrictions. For the remaining, and expectedly major, part of the
data, relevant data necessary for the verification of results published in
scientific journals can be made accessible on a case-by-case basis. The
decision concerning the publication of data will be made by the Management
Board, as the decision-making body of the consortium. Research data and
general documentation of public interest, such as those underlying scientific
publications, will be made accessible via the ONE‐FLOW website and data
repositories, such as 4TU.Centre for Research Data (see above).
_Data storage/archiving at project level_
The ONE‐FLOW Surfdrive site serves to exchange all general information and
documents like MB and GA meeting minutes, WP related documents, Grant
Agreement and Consortium Agreement, deliverables, publications, etc. This
platform shall also facilitate the exchange of research data between the
ONE‐FLOW partners. Relevant data (i.e., data needed to validate the results in
scientific publications) will be stored on Surfdrive during the research phase
and will then be archived/published in 4TU.Centre for Research Data.
The team room for information and document exchange of the ONE‐FLOW project is
accessible for all partners, ONE‐FLOW staff members and Finance members via
the domain www.surfdrive.surf.nl. All Dutch university members have been added
after the standard log‐in procedure. All non‐Dutch university members and the
EC Officer have received the required login information including their
password per email on February 20th, 2017.
The user welcome screen features the following at-a-glance information: (i)
financial documents; (ii) meetings; (iii) official ONE‐FLOW documents; (iv)
presentations; and (v) work packages. The EC officers can view all documents.
The financial staff members only have access to the financial documents, which
they can read and change. The administrator can (i) incorporate, edit,
delete partners, (ii) incorporate, edit, delete and disable users, (iii)
incorporate, edit, delete work packages, tasks, and sub‐tasks, and (iv)
incorporate deliverables.
The work package leader in addition can create deliverable versions and
receives a reminder with increasing frequency three weeks in advance from the
date the respective deliverable is due. The reminder allows direct access to
the respective deliverable/document.
## 2.3 Making data interoperable
Since our data for dissemination will largely be made available via 4TU.Centre
for Research Data, interoperability of the data will be ensured by the
repository’s policy. For data deposited in 4TU.Centre for Research Data we
will try to adhere as much as possible to their preferred _data/file_
_formats_ .
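As an illustration (not a prescribed procedure), tabular results kept in a proprietary spreadsheet could be exported to an open, preservation-friendly format with a few lines of Python; the file names are hypothetical, and reading .xlsx assumes the openpyxl package is installed alongside pandas.

```python
import pandas as pd

# Convert a proprietary spreadsheet into an open, preservation-friendly format.
df = pd.read_excel("wp2_cascade_yields.xlsx", sheet_name=0)
df.to_csv("wp2_cascade_yields.csv", index=False, encoding="utf-8")
```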
## 2.4 Reuse of data (user licenses)
Datasets deposited in 4TU.Centre for Research Data allow users to choose from
a menu of usage licenses (mainly Creative Commons and ODB licenses). We will
decide upon deposition of datasets which license appears most suitable. Data
will become available for re‐use immediately after publication in 4TU.Centre
for Research Data. Typically there will be no embargo period for the data.
4TU.Centre for Research Data archives data for a minimum of 15 years. Data
from the ONE‐FLOW project are provided in a format that is recommended by
4TU.Centre for Research Data for long term preservation and re‐use. 4TU.Centre
for Research Data has received a Data Seal of Approval which guarantees that
the research data deposited here will continue to be able to be found and
shared in the future.
# 3 Allocation of Resources
## 3.1 Costs and value
There are no additional data management costs foreseen that require
additional/separate budgeting.
Storage/archiving facilities for data during the research project are
available as in‐house facilities at the universities/institutes. Costs for
these facilities are typically borne by the departments.
Long‐term archiving/publication in 4TU.Centre for Research Data is in
principle subject to costs that would need to be budgeted, but TU/e
researchers can deposit datasets <100 GB in the repository at no cost. Since
we expect to produce a total amount of data smaller than 100 GB, no costs need
to be covered.
It is impossible to predict the future value of the data that will be
deposited, for example because there are no datasets yet (to our knowledge)
that allow us to benchmark the potential value against existing datasets.
“Immaterial benefits” of publishing the data in a data repository are
addressed under Section 1 (Data Utility).
## 3.2 Responsibilities
The lead for this task is TU/e (Van Hest, formerly Hessel), with TU Graz
(Gruber‐Woelfler) and MIC (Tekautz) as co-leads. All partners, however, are
involved in providing support and input to complete the DMP.
The ownership of data is regulated by the Consortium Agreement. The data owner
will primarily store its data on its own servers following its internal data
management procedures. Most partners have a dedicated Data Protection Officer
(DPO) and some partners have a Research Data Management team (RDM team).
# 4 Data Security
At all institutions data security is governed by the ICT departments. Data
security policies are typically at the institutional level (occasionally
supported by departmental support policies/practices).
# 5 Ethical aspects
ONE‐FLOW does not handle personal data except for actors appearing in shoots
and pictures, in which case the partners follow the standard procedure of
getting authorization from the actors to show and distribute the videos and
images to the public.
# 6 Other (i.e. national/funder/sectorial/departmental procedures for data management)
As yet, there are, to our knowledge, no national policies on research data
management. However, the TU/e’s leading principles for data management are
outlined in the national _Code of Scientific_ _Conduct_ and the university’s
own _TU/e Code of Scientific Conduct_ . We have used these principles as a
supporting guide for organizing our RDM.
# Appendix 1: Detailed overview of data types and formats
The most common data types to be generated, collected and used in ONE‐FLOW are
listed below per work package. The data range from typical data used in
reaction chemistry, supramolecular chemistry/functional materials, reactor
engineering and process/sustainability modelling and design. Information
related to each type of data and its concrete expressions is given. Most
relevant here are the file format, the total expected file size, and the
expected data‐access status after the project.
## A1.1 Work Package 1 ‐ Data Inventory
Work Package 1 deals with supramolecular chemistry/functional materials. Four
partners (color‐coded) filled in the inventory (see Table A1). The total data
size will likely approach about 15 GB. The partners, however, have announced
different data needs, ranging from MBs to GBs. The sharing of data has been
classified into four categories: open access, open‐embargo, limited share and
share only (the terms are explained in the legend of Table A1). A discussion
at the next GA, and possibly by telecon, will aim at unifying the decision
path for sharing data.
**Table A1** _Overview of expected data types and further information,
including the sharing policy, in WP 1._ _Clarification of the terms in the
last column: Open = data can be made public open access without embargo;
Open‐embargo = data can be made public open access after an embargo period;
Share only = data cannot be made public and are only shared between (some)
project members; access to data is controlled; Limited Share = data cannot be
made public and cannot be shared between all project members; access to data
is restricted._
## A1.2 Work Package 2
Work Package 2 deals with reaction chemistry for cascades and their transfer
into micro‐flow. Four partners (color‐coded) filled in the inventory (see
Table A2). The total data size will likely be in the order of 10‐15 GB. The
partners, however, have announced different data needs, ranging from MBs to
GBs; these might be harmonized by unifying the data considered. The sharing of
data is mostly open access and frequently open‐embargo; limited share and
share only have also been chosen in places. A discussion at the next GA, and
possibly by telecon, can aim at unifying the decision path for sharing data.
**Table A2** _Overview of expected data types and further information,
including the sharing policy, in WP 2._
## A1.3 Work Package 3
Work Package 3 deals with process engineering to achieve fully compatible
process ingredients, thereby eliminating the need for compartmentation and
separation. Three partners (color‐coded) filled in the inventory (see Table
A3). The total data size will likely be in the order of 10‐15 GB. The
partners, however, have announced different data needs, ranging from MBs to
GBs; these might be harmonized by unifying the data considered. The sharing of
data switches between open access and limited share / share only. A discussion
at the next GA, and possibly by telecon, can aim at unifying the decision path
for sharing data.
**Table A3** _Overview of expected data types and further information,
including the sharing policy, in WP 3._
## A1.4 Work Package 4
Work Package 4 deals with the digitalization of the new factory approach. Four
partners (color‐coded) filled in the inventory (see Table A4). The total data
size will likely be in the order of 10‐15 GB. The partners, however, have
announced different data needs, ranging from KBs to GBs; these might be
harmonized by unifying the data considered. The sharing of data ranges from
open access to open‐embargo to limited share and share only. A discussion at
the next GA, and possibly by telecon, can aim at unifying the decision path
for sharing data.
**Table A4** _Overview of expected data types and further information,
including the sharing policy, in WP 4._
## A1.5 Work Package 5
Work Package 5 deals with reactor engineering and process/sustainability
modelling and design. Three partners (color‐coded) filled in the inventory
(see Table A5). The total data size will likely be in the order of 1‐2 GB. The
partners have announced similar needs. The sharing of data is for a large part
share only, which relates to the fact that here a platform technology is
developed by an SME aiming at commercialization. The academic partners have
chosen open access. A discussion at the next GA, and possibly by telecon, can
aim at unifying the decision path for sharing data.
**Table A5** _Overview of expected data types and further information,
including the sharing policy, in WP 5._
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
1250_EFFECT_737301.md
|
# Introduction
EFFECT is an H2020-funded project under the FET Programme aiming to enhance
the visibility and impact of FET research among a wide diversity of actors
(researchers, industry, policy makers, civil society organisations, citizens
etc.) and to stimulate debate and collaboration among multiple stakeholders
through dedicated community building and public engagement activities.
Research data are as important as the publications they support. Even if
EFFECT is not going to produce research data, as it will rather work with
existing public data, specific datasets will be generated from its analysis,
communication and engagement activities. Hence the importance for EFFECT of
defining a data management policy.
This document introduces the Data Management Plan (DMP). It will primarily
list the different datasets that will be produced by the project, the main
exploitation perspectives for each of those datasets and the major management
principles the project will implement to handle those datasets.
The purpose of the DMP is to provide an analysis of the main elements of the
data management policy that will be used by the consortium with regard to all
the datasets that will be generated by the project.
## Datasets Description
The EFFECT project partners have identified the datasets that will be produced
during the different phases of the project. The list is provided below, while
the nature and details for each dataset are given in the subsequent sections.
<table>
<tr>
<th>
**Number**
</th>
<th>
**Name**
</th>
<th>
**Responsible Partner**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
EFFECT Website and Newsletter Subscribers
</td>
<td>
YOURIS
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
FET projects Database
</td>
<td>
ZABALA
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
EFFECT Events (workshops, webinars, events) subscribers
</td>
<td>
ALL
</td> </tr> </table>
Table 1. List of datasets
### Personal Data Protection
For some of the activities to be carried out by the project, it may be
necessary to collect basic personal data (e.g. full name, contact details,
background), even though the project will avoid collecting such data unless
deemed necessary.
Such data will be protected in compliance with the EU's Data Protection
Directive 95/46/EC, aiming at protecting personal data, as described in D6.1
POPD - Requirement No. 1.
All data collection by the project will take place only after giving data
subjects full details on the treatment of their data and after obtaining
signed informed consent forms.
# Data Management Plan
## EFFECT Website and Newsletter Subscribers
<table>
<tr>
<th>
**Data identification**
</th> </tr>
<tr>
<td>
**Dataset description:** Mailing list containing email addresses and names of
all subscribers.
</td> </tr>
<tr>
<td>
**Source:** This dataset is automatically generated when visitors sign up to
the newsletter form available on the project website.
</td> </tr>
<tr>
<td>
**Partners responsibilities**
</td> </tr>
<tr>
<td>
**Partner owner of the data:** youris.com
</td> </tr>
<tr>
<td>
**Partner in charge of the data collection:** youris.com
</td> </tr>
<tr>
<td>
**Partner in charge of the data analysis:** youris.com
</td> </tr>
<tr>
<td>
**Partner in charge of the data storage:** youris.com
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
**Info about metadata (production and storage dates, places) and
documentation:** N/A
</td> </tr>
<tr>
<td>
**Standards, format, estimated volume of data:** This dataset can be imported
from, and exported to, a CSV, TXT or Excel file.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
**Data exploitation (purpose/use of the data analysis):** The mailing list
will be used for disseminating the project news to a targeted audience.
</td> </tr>
<tr>
<td>
**Data access policy / Dissemination level (confidential, only for members of
the Consortium and the Commission Services, or Public):** As it contains
personal data, access to the dataset is restricted to the EFFECT consortium.
</td> </tr>
<tr>
<td>
**Data sharing, re-use, distribution, publication:** None
</td> </tr>
<tr>
<td>
**Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?** The
mailing list contains personal data (names and email addresses of newsletter
subscribers). People interested in the project voluntarily register, through
the project website, to receive the project newsletter. They can unsubscribe
at any time.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
**Data storage (including backup): where? For how long?** The dataset will be
preserved on the youris.com server for the whole project duration.
</td> </tr> </table>
## FET projects Database
<table>
<tr>
<th>
**Data identification**
</th> </tr>
<tr>
<td>
**Dataset description:** This dataset contains the names, contact details and
email addresses of 170 FET projects’ coordinators and other partners.
</td> </tr>
<tr>
<td>
**Source:** This dataset is generated via desk research of publicly available
information and direct contacts collected directly via the interested parties
to perform Task 2.2 (Searching for involvement and commitment of results’
owners, interviewing the projects).
</td> </tr>
<tr>
<td>
**Partners responsibilities**
</td> </tr>
<tr>
<td>
**Partner owner of the data:** ZABALA
</td> </tr>
<tr>
<td>
**Partner in charge of the data collection:** ZABALA
</td> </tr>
<tr>
<td>
**Partner in charge of the data analysis:** ZABALA
</td> </tr>
<tr>
<td>
**Partner in charge of the data storage:** ZABALA
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
**Info about metadata (production and storage dates, places) and
documentation:** N/A
</td> </tr>
<tr>
<td>
**Standards, format, estimated volume of data:** This dataset can be imported
from, and exported to, a CSV, TXT or Excel file.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
**Data exploitation (purpose/use of the data analysis):** The mailing list
will be used for contacting projects in order to identify outputs and/or
stories of these projects to be later communicated through a mix of
communication formats and distribution channels.
</td> </tr>
<tr>
<td>
**Data access policy / Dissemination level (confidential, only for members of
the Consortium and the Commission Services, or Public):** As it contains
personal data, access to the dataset is restricted to the EFFECT consortium
only.
</td> </tr>
<tr>
<td>
**Data sharing, re-use, distribution, publication:** None
</td> </tr>
<tr>
<td>
**Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?** The
mailing list contains personal data (names and email addresses) publicly
available online.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
**Data storage (including backup): where? For how long?** The dataset will be
preserved on ZABALA’s server.
</td> </tr> </table>
## EFFECT Events (workshops, webinars, events) subscribers
<table>
<tr>
<th>
**Data identification**
</th> </tr>
<tr>
<td>
**Dataset description:** This dataset contains the names, contacts and email
addresses of users interested in subscribing to EFFECT events (webinars,
workshops, Meet&Match events).
</td> </tr>
<tr>
<td>
**Source:** This dataset is automatically generated when users sign up to the
event form available on the project website or on the event website.
</td> </tr>
<tr>
<td>
**Partners responsibilities**
</td> </tr>
<tr>
<td>
**Partner owner of the data:** YOURIS, in case of subscription on the EFFECT
project website. The organizers of events not directly managed by EFFECT, in
case of online subscription on their websites.
</td> </tr>
<tr>
<td>
**Partner in charge of the data collection:** All partners (depending on the
organisation of the specific event)
</td> </tr>
<tr>
<td>
**Partner in charge of the data analysis:** All partners (depending on the
organisation of the specific event)
</td> </tr>
<tr>
<td>
**Partner in charge of the data storage:** All partners (depending on the
organisation of the specific event)
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
**Info about metadata (production and storage dates, places) and
documentation:** N/A
</td> </tr>
<tr>
<td>
**Standards, format, estimated volume of data:** This dataset can be imported
from, and exported to, a CSV, TXT or Excel file.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
**Data exploitation (purpose/use of the data analysis):** The mailing list
will be used for informing the interested users about a specific event
organised within the framework of the project.
</td> </tr>
<tr>
<td>
**Data access policy / Dissemination level (confidential, only for members of
the Consortium and the Commission Services, or Public):** As it contains
personal data, access to the dataset is restricted to the EFFECT consortium.
</td> </tr>
<tr>
<td>
**Data sharing, re-use, distribution, publication:** None
</td> </tr>
<tr>
<td>
**Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?** The
mailing list contains personal data (names and email addresses). People
interested in the event voluntarily register, through the project website or
the event page, to receive info about the event. They can unsubscribe at any
time.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
**Data storage (including backup): where? For how long?** The dataset will be
preserved on the interested partners’ server.
</td> </tr> </table>
# Conclusion
This Data Management Plan provides an overview of the data that the EFFECT
project will produce together with related challenges and constraints that
need to be taken into consideration.
The analysis contained in this report makes it possible to anticipate the
procedures and infrastructures to be implemented by the EFFECT project to
efficiently manage the data it will produce.
Nearly all project partners will be owners or/and producers of data, which
implies specific responsibilities, described in this report.
The EFFECT project Data Management Plan puts a strong emphasis on the
appropriate collection, storing and preservation of those datasets.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
1251_EFFECT_737301.md
|
**Executive Summary**
This document is a deliverable of the EFFECT project, which is funded by the
European Union’s Horizon 2020 FET Programme under grant agreement No. 737301.
This document is the updated version of D1.7 Data Management Plan (DMP),
released in Month 6 (June 2017).
It describes what kind of data the project generated, how they were produced
and analysed. It also details how the data related to the EFFECT project were
disseminated and afterwards shared and preserved.
Moreover, the DMP covers the following aspects:
* Identification of datasets that were produced during the different phases of the project;
* Description of Personal Data Protection approach;
* List of the major management principles the project implemented to handle those datasets.
The EFFECT project Data Management Plan puts a strong emphasis on the
appropriate collection, storing and preservation of those datasets.
_**Table of Content**_
1 Introduction
2 GDPR
2.1 Data controller and Data processor
2.2 Personal Data within EFFECT
2.3 Privacy Policy on EFFECT website
3 Datasets Description
4 Data Management Plan
4.1 EFFECT Website and Newsletter Subscribers
4.2 FET projects Database
4.3 EFFECT Events (workshops, webinars, events) subscribers
5 Conclusions
# Introduction
EFFECT is an H2020-funded project under the FET Programme aiming to enhance
the visibility and impact of FET research among a wide diversity of actors
(researchers, industry, policy makers, civil society organisations, citizens
etc.) and to foster synergy and collaboration among multiple stakeholders
through dedicated community building and public engagement activities.
During the two years of the project duration, EFFECT has generated specific
datasets from its analysis, communication and engagement activities. For this
reason, with the first release of the Data Management Plan (D1.7), EFFECT
defined a data management policy to be applied to every dataset created.
In 2018, a new regulation on data protection and privacy for all individuals
within the European Union (EU) and the European Economic Area (EEA) came into
force. Hence, each EFFECT partner has introduced new processes and policies to
manage data.
Within the EFFECT consortium, youris.com G.E.I.E. is the partner responsible
for communication and dissemination activities. The personal data and the data
treatment described below are managed by youris.com G.E.I.E. staff, who act
according to high and proven professionalism in the design of communication
strategies and support the other consortium partners in the management of
these data where required.
This document aims to describe the datasets produced during the project, the
main exploitation perspectives for each of those datasets and the major
management principles the project has implemented to handle those datasets.
# GDPR
The General Data Protection Regulation (EU) 2016/679 ("GDPR") is a
regulation in EU law on data protection and privacy for all individuals within
the European Union (EU) and the European Economic Area (EEA). The GDPR aims
primarily to give control to individuals over their personal data and to
simplify the regulatory environment for international business by unifying the
regulation within the EU.
Controllers of personal data must put in place appropriate technical and
organisational measures to implement the data protection principles. Business
processes that handle personal data must be designed and built with
consideration of the principles and provide safeguards to protect data (for
example, using pseudonymization or full anonymization where appropriate), and
use the highest-possible privacy settings by default, so that the data is not
available publicly without explicit, informed consent, and cannot be used to
identify a subject without additional information stored separately. No
personal data may be processed unless it is done under a lawful basis
specified by the regulation, or unless the data controller or processor has
received an unambiguous and individualized affirmation of consent from the
data subject. The data subject has the right to revoke this consent at any
time.
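As an illustration of the pseudonymization safeguard mentioned above, a direct identifier can be replaced by a keyed token, with the key (the "additional information") stored separately; this is a minimal sketch, not EFFECT's actual procedure.

```python
import hashlib
import hmac
import secrets

# The secret key is the "additional information stored separately": without it,
# the token cannot be linked back to the data subject.
SECRET_KEY = secrets.token_bytes(32)

def pseudonymise(email: str) -> str:
    """Replace a direct identifier with a stable keyed token."""
    return hmac.new(SECRET_KEY, email.lower().encode(), hashlib.sha256).hexdigest()

record = {"subscriber": pseudonymise("subscriber@example.org"), "country": "IT"}
```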
When data is collected, data subjects must be clearly informed about the
extent of data collection, the legal basis for processing of personal data,
how long data is retained, if data is being transferred to a third-party
and/or outside the EU, and disclosure of any automated decision-making that is
made on a solely algorithmic basis. Data subjects must be provided with
contact details for the data controller and their designated data protection
officer, where applicable. Data subjects must also be informed of their
privacy rights under the GDPR, including their right to revoke consent to data
processing at any time, their right to view their personal data and access an
overview of how it is being processed, their right to obtain a portable copy
of the stored data, the right to erasure of data under certain circumstances,
the right to contest any automated decision-making that was made on a solely
algorithmic basis, and the right to file complaints with a Data Protection
Authority. The GDPR has applied to organisations in Europe since 25 May 2018.
## Data controller and Data processor
The **data controller** is the person (or business) who determines the
purposes for which, and the way in which, personal data is processed. The
**data processor** is anyone who processes personal data on behalf of the data
controller. The same organization can be both a data controller and data
processor, and it is perfectly possible for two separate organizations to be
data processors of the same data.
The GDPR states that controllers must ensure that personal data are processed
lawfully, transparently, and for a specific purpose. That means people must
understand why and how their data are being processed, and that processing
must abide by the GDPR rules.
## Personal Data within EFFECT
For some of the activities carried out by the project, it was necessary to
collect basic personal data (e.g. full name, contact details, background),
even though the project avoided collecting such data unless deemed necessary.
Such data are protected by the EFFECT Consortium in compliance with Regulation
(EU) 2016/679 (GDPR).
## Privacy Policy on EFFECT website
On the EFFECT website, the privacy policy has been updated to meet the new
GDPR requirements.
# Datasets Description
The EFFECT project partners have identified the datasets that have been
produced during the different phases of the project. The list is provided
below, while the nature and details for each dataset are given in the
subsequent sections.
<table>
<tr>
<th>
**Number**
</th>
<th>
**Name**
</th>
<th>
**Responsible Partner**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
EFFECT Website and Newsletter Subscribers
</td>
<td>
YOURIS
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
FET projects Database
</td>
<td>
ZABALA
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
EFFECT Events (workshops, webinars, events) subscribers
</td>
<td>
ALL
</td> </tr> </table>
Table 1. List of datasets
# Data Management Plan
## EFFECT Website and Newsletter Subscribers
<table>
<tr>
<th>
**Data identification**
</th> </tr>
<tr>
<td>
**Dataset description:** Mailing list containing email addresses and names of
all subscribers.
</td> </tr>
<tr>
<td>
**Source:** This dataset is automatically generated when visitors sign up to
the newsletter form available on the project website.
</td> </tr>
<tr>
<td>
**Partners responsibilities**
</td> </tr>
<tr>
<td>
**Partner owner of the data:** youris.com
</td> </tr>
<tr>
<td>
**Partner in charge of the data collection:** youris.com
</td> </tr>
<tr>
<td>
**Partner in charge of the data analysis:** youris.com
</td> </tr>
<tr>
<td>
**Partner in charge of the data storage:** youris.com
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
**Info about metadata (production and storage dates, places) and
documentation:** N/A
</td> </tr>
<tr>
<td>
**Standards, format, estimated volume of data:** This dataset can be imported
from, and exported to, a CSV, TXT or Excel file.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
**Data exploitation (purpose/use of the data analysis):** The mailing list
will be used for disseminating the project news to a targeted audience.
</td> </tr>
<tr>
<td>
**Data access policy / Dissemination level (confidential, only for members of
the Consortium and the Commission Services, or Public):** As it contains
personal data, access to the dataset is restricted to the EFFECT consortium.
</td> </tr>
<tr>
<td>
**Data sharing, re-use, distribution, publication:** None
</td> </tr>
<tr>
<td>
**Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?** The
mailing list contains personal data (names and email addresses of newsletter
subscribers). People interested in the project voluntarily register, through
the project website, to receive the project newsletter. They can unsubscribe
at any time.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
**Data storage (including backup): where? For how long?** The dataset will be
preserved on the youris.com server for the whole project duration.
</td> </tr> </table>
## FET projects Database
<table>
<tr>
<th>
**Data identification**
</th> </tr>
<tr>
<td>
**Dataset description:** This dataset contains the names, contact details and
email addresses of 170 FET projects’ coordinators and other partners.
</td> </tr>
<tr>
<td>
**Source:** This dataset is generated via desk research of publicly available
information and direct contacts collected directly via the interested parties
to perform Task 2.2 (Searching for involvement and commitment of results’
owners, interviewing the projects).
</td> </tr>
<tr>
<td>
**Partners responsibilities**
</td> </tr>
<tr>
<td>
**Partner owner of the data:** ZABALA
</td> </tr>
<tr>
<td>
**Partner in charge of the data collection:** ZABALA
</td> </tr>
<tr>
<td>
**Partner in charge of the data analysis:** ZABALA
</td> </tr>
<tr>
<td>
**Partner in charge of the data storage:** ZABALA
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
**Info about metadata (production and storage dates, places) and
documentation:** N/A
</td> </tr>
<tr>
<td>
**Standards, format, estimated volume of data:** This dataset can be imported
from, and exported to, a CSV, TXT or Excel file.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
**Data exploitation (purpose/use of the data analysis):** The mailing list
will be used for contacting projects in order to identify outputs and/or
stories of these projects to be later communicated through a mix of
communication formats and distribution channels.
</td> </tr>
<tr>
<td>
**Data access policy / Dissemination level (confidential, only for members of
the Consortium and the Commission Services, or Public):** As it contains
personal data, access to the dataset is restricted to the EFFECT consortium
only.
</td> </tr>
<tr>
<td>
**Data sharing, re-use, distribution, publication:** None
</td> </tr>
<tr>
<td>
**Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?** The
mailing list contains personal data (names and email addresses) publicly
available online.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
**Data storage (including backup): where? For how long?** The dataset will be
preserved on ZABALA’s server.
</td> </tr> </table>
## EFFECT Events (workshops, webinars, events) subscribers
<table>
<tr>
<th>
**Data identification**
</th> </tr>
<tr>
<td>
**Dataset description:** This dataset contains the names, contacts and email
addresses of users interested in subscribing to EFFECT events (webinars,
workshops, Meet&Match events).
</td> </tr>
<tr>
<td>
**Source:** This dataset is automatically generated when users sign up to the
event form available on the project website or on the event website.
</td> </tr>
<tr>
<td>
**Partners responsibilities**
</td> </tr>
<tr>
<td>
**Partner owner of the data:** YOURIS, in case of subscription on the EFFECT
project website. The organizers of events not directly managed by EFFECT, in
case of online subscription on their websites.
</td> </tr>
<tr>
<td>
**Partner in charge of the data collection:** All partners (depending on the
organisation of the specific event)
</td> </tr>
<tr>
<td>
**Partner in charge of the data analysis:** All partners (depending on the
organisation of the specific event)
</td> </tr>
<tr>
<td>
**Partner in charge of the data storage:** All partners (depending on the
organisation of the specific event)
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
**Info about metadata (production and storage dates, places) and
documentation:** N/A
</td> </tr>
<tr>
<td>
**Standards, format, estimated volume of data:** This dataset can be imported
from, and exported to, a CSV, TXT or Excel file.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
**Data exploitation (purpose/use of the data analysis):** The mailing list
will be used for informing the interested users about a specific event
organised within the framework of the project.
</td> </tr>
<tr>
<td>
**Data access policy / Dissemination level (confidential, only for members of
the Consortium and the Commission Services, or Public):** As it contains
personal data, access to the dataset is restricted to the EFFECT consortium.
</td> </tr>
<tr>
<td>
**Data sharing, re-use, distribution, publication:** None
</td> </tr>
<tr>
<td>
**Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?** The
mailing list contains personal data (names and email addresses). People
interested in the event voluntarily register, through the project website or
the event page, to receive info about the event. They can unsubscribe at any
time.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
**Data storage (including backup): where? For how long?** The dataset will be
preserved on the interested partners’ server.
</td> </tr> </table>
# Conclusions
This update of the Data Management Plan has provided an overview of the
management of the data that the EFFECT project has produced during its two
years of activity.
The analysis contained in this report has supported the sound implementation
of the data management procedures applied by EFFECT.
Nearly all project partners have been owners and/or producers of data, which
implies specific responsibilities, described in this report, including the
application of the new GDPR rules both at project level and at individual
partner level.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
1253_EMPHASIS-PREP_739514.md
|
# Executive Summary
## Objectives
EMPHASIS should establish rules on data property, sharing and the right of
first use, including the metadata necessary for fair sharing in accordance
with the EU policy of open data, within the data management plan. This
involves the ownership of data, the duration of confidentiality for data
involving private companies, principles that address consortium agreements in
EU or national projects, and the handling of novel methods and techniques.
## Main results
An operational Data Management Plan (DMP) is currently implemented within the
related EPPN2020 project providing transnational Access and addressing data
management. Within EMPHASIS a DMP which can serve as a template for further
partner activities will be elaborated to reduce the loss of data and increase
the availability and reusability of data according to the legal and ethical
standards. In general plant phenotyping data are generated during the research
process, that are used for it, or that are the result of the research process
and may occur in different media types and formats (as numerical data, images,
documents, texts etc.) during the data life cycle. The metadata in this
context are important for the re-use of research data ensuring also the
possible verification of research results by third parties addressed by
Minimum Information on Plant Phenotyping Experiments (MIAPPE). In general
research data should be published and EMPHASIS will strongly encourage that
data should be published in a way that access to these data is enabled and
possible, e.g. via a web interface or a citable data publication.
Research activities between two or multiple partners will be supported by
agreements including data management issues, currently a template of such an
agreement is used within the EPPN2020 projects and will be adapted for
EMPHASIS.
# The goal of the Data Management Plan
A Data Management Plan (DMP) describes the management of datasets generated
and processed during and after the use of EMPHASIS services such as access to
plant phenotyping facilities to perform dedicated experiments. As EMPHASIS
goes beyond a simple H2020 project, there are specific requirements and
sustainability issues that need to be addressed. This DMP shall develop a
general framework to help manage data, meet funder requirements, and
facilitate multiple use of data by the scientific community. As such, the DMP
ranges from data acquisition to the evaluation, processing up to archiving and
publication of the data, where appropriate. The goal is to make these data
“findable, accessible, interoperable and reusable” (FAIR), which is becoming
increasingly important in many research projects, requiring a proof of
structured data management embedded in a sound data policy during and after
completion of a research project. Specifically, the ERC Scientific Council
recommends to “ _all its funded researchers that they follow best practice by
retaining files of all the research data they have produced and used during
the course of their work, and that they be prepared to share these data with
other researchers whenever they are not bound by copyright restrictions,
confidentiality requirements, or contractual clauses.”_ As such, a DMP
should:
* reduce the risk of data loss
* make data available and reusable
* promote the implementation of ethical standards and principles of Good Scientific Practice
* create legal certainty
* improve data exchange within research groups
Practical implementation of a DMP in plant phenotyping and as such highly
relevant for EMPHASIS has been performed in the I3 project EPPN2020:
_https://eppn2020.plantphenotyping.eu/Data_Policy_
# Data description
The general aim is that data are interoperable between platforms and that
querying mechanisms are standardized. All platforms will collect data in such
a way that this is eventually possible, and tools will be progressively
deployed in the project sites. The scientific managers of phenotyping
facilities shall be responsible for managing the data and ensure that the DMP
is carried out.
Research data, in very general terms, refer to all data that are generated
during the research process, that are used for it, or that are the result of
the research process. Depending on the specific research question and methods
used, data are generated, obtained or collected, observed, simulated, derived,
validated, processed, analysed, published and finally archived. These data
therefore may occur in different media types and formats, the aggregation and
quality of these data depends on the stage in the life cycle of the data.
Specifically, data collected in phenotyping experiments can be numerical data,
images, documents, texts or manual measurements. The data need to be
complemented by metadata that describe the research data. This may include
includes authorship, contact data, time of creation, it defines the data
formats used and contains the context that led to the data. Additionally,
metadata should include settings of experimental sensors or instruments,
environmental conditions, comments, and measurement uncertainties etc. The
metadata is essential for the re-use of research data ensuring also the
possible verification of research results by third parties.
In plant phenotyping, the Minimum Information About a Plant Phenotyping
Experiment (MIAPPE) comprises both a conceptual checklist of the metadata
required to adequately describe a plant phenotyping experiment, and software
to validate, store and disseminate MIAPPE-compliant data. Within EMPHASIS, we
strongly encourage following the recommendations outlined in MIAPPE.
Examples of data categories and sources that need to be managed in a
phenotyping experiment are listed below (a minimal metadata sketch follows the
list):
* Resource description: information of genotype, seed, accession
* Facilities: installations, sensors, cameras, conveyors and specific devices
* Trait recovery work-flows: sensor and image analysis methods and software tools used to extract traits from raw image and other data
* Phenotypic data at plant or population level, e.g., image-based traits, phenological stages, manual measurements
* Environmental conditions as collected by sensors (e.g. soil water status, air temperature or evaporative demand)
* Date and description of management events (e.g. irrigation, pruning, sampling) and of observations (e.g. plant disease, accidents)
* Characteristics of experiments e.g. design, protocol and organisation
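A minimal sketch of a MIAPPE-inspired metadata record covering the categories above; the field names are simplified, hypothetical illustrations rather than the normative MIAPPE checklist.

```python
experiment_metadata = {
    # Resource description: genotype, seed, accession
    "genotype": {"species": "Zea mays", "accession": "B73"},
    # Facilities: installation and sensors used
    "facility": {"installation": "growth chamber GC-1",
                 "sensors": ["RGB camera", "soil moisture probe"]},
    # Trait recovery work-flow: software and extracted traits
    "workflow": {"software": "image-analysis pipeline v1.2",
                 "traits": ["leaf area", "plant height"]},
    # Environmental conditions collected by sensors
    "environment": {"air_temperature_C": 24.5, "soil_water_status": "well-watered"},
    # Management events and observations
    "events": [{"date": "2019-05-02", "type": "irrigation"}],
    # Characteristics of the experiment
    "design": {"protocol": "randomized complete block", "replicates": 4},
}
```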
# Data publication
The basic utilisation of research data is publication, specifically in the
academic context. Data are exploited and published by the scientists
responsible for them, unless there are substantial grounds for not doing so.
This first use of data is usually a publication in a specific journal; the
extent and duration of the first exploitation can be determined in the DMP,
and the end of the first exploitation phase should be context-dependent and
justified. For example, within the EPPN2020 project, “data will be published
whenever possible, and made available either after publication or at the
latest three years after the last day of the experiment.”
Within EMPHASIS we will strongly encourage that data be published in a way
that access to these data is enabled and possible, e.g. via a web interface or
a citable data publication. An important technical tool for publishing
research data is a data repository, a server service to which data can be
uploaded by the data creators. The data should receive a worldwide unique
identifier (e.g. a DOI) and can then be searched and downloaded. There are a
number of different repositories, e.g. (a deposit sketch is given after this
list):
* topic-specific repositories (esp. for plant science and phenotyping), e.g. e!DAL
* generic repositories, e.g. Zenodo, Dryad
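To make the repository workflow concrete, the sketch below deposits a file in a generic repository via Zenodo's public REST API and returns the assigned DOI. It is a simplified illustration (an access token is assumed and error handling is minimal), not an EMPHASIS tool.

```python
import os
import requests

API = "https://zenodo.org/api/deposit/depositions"

def deposit(token, filepath, title, creator):
    auth = {"access_token": token}
    # 1. Create an empty draft deposition
    dep = requests.post(API, params=auth, json={}).json()
    # 2. Upload the data file into the deposition's file bucket
    with open(filepath, "rb") as fh:
        requests.put(f"{dep['links']['bucket']}/{os.path.basename(filepath)}",
                     data=fh, params=auth)
    # 3. Attach minimal descriptive metadata
    meta = {"metadata": {"title": title, "upload_type": "dataset",
                         "description": title, "creators": [{"name": creator}]}}
    requests.put(f"{API}/{dep['id']}", params=auth, json=meta)
    # 4. Publish: the record becomes citable and receives a DOI
    return requests.post(dep["links"]["publish"], params=auth).json().get("doi")
```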
# Legal status of data
Research data may have an intellectual level of creation worthy of protection.
In order to ensure legal certainty, it should always be assumed that
protection is important, i.e. that usage and exploitation rights are clarified
contractually with external partners. EMPHASIS will strongly support the
establishment of contractual agreements in research projects to ensure fair
exploitation of data. Currently, there is a template for an agreement for
bilateral projects within the I3 project EPPN2020, to be used within the
Transnational Access and to clearly define the roles as well as the rules for
data use and reuse by the access provider and the user.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
1254_ACTRIS PPP_739530.md
|
# Introduction to The ACTRIS Data Centre and ACTRIS Data Management Plan
The Aerosol, Clouds and Trace Gases Research Infrastructure (ACTRIS) focuses
on producing high-quality data for the understanding of short-lived
atmospheric constituents and their interactions. These constituents have a
residence time in the atmosphere from hours to weeks. The short lifetimes make
their concentrations highly variable in time and space and involve processes
occurring on very short timescales. These considerations separate the short-
lived atmospheric constituents from long-lived greenhouse gases, and calls for
a four dimensional distributed observatory. The Research Infrastructure (RI)
ACTRIS is the pan-European RI that consolidates activities amongst European
partners for observations of aerosols, clouds, and trace gases and for
understanding of the related atmospheric processes, as well as to provide RI
services to wide user groups (See the Stakeholder Handbook for more
information).
ACTRIS data are data from observational or exploratory National Facilities
complying with the procedures established within ACTRIS.
ACTRIS observational platforms are fixed ground-based stations that produce
long-term data based on a regular measurement schedule and common operation
standards. These platforms perform measurements of aerosol, clouds, and
reactive trace gases from the Earth surface throughout the troposphere up to
the stratosphere by applying state-of-the-art remote-sensing and in situ
measurement techniques under consideration of harmonized, standardized, and
quality controlled instrumentation, operation procedures and data retrieval
schemes. The sites are strategically located in diverse climatic regimes both
within and outside Europe, and many of them contribute to one or several
European and international networks, such as EMEP, NDACC, or GAW, and are
possibly partly shared with other environmental infrastructures, such as ICOS,
SIOS, ANAEE or eLTER.
Laboratory platforms and mobile platforms that perform dedicated experiments
and contribute data on atmospheric constituents, processes, events or regions
by following common ACTRIS standards are considered ACTRIS exploratory
platforms. In addition to these, atmospheric simulation chambers are ACTRIS
exploratory platforms too. These chambers are among the most advanced tools
for studying and quantifying atmospheric processes and are used to provide
many of the parameters incorporated in air quality and climate models.
Atmospheric simulation chamber data contribute to better predicting the
behaviour of the atmosphere over all time scales through a detailed
understanding of the physical and chemical processes which affect air quality
and climate change.
Figure 1: Overview of the types of National Facilities providing data to
ACTRIS Data Centre
ACTRIS is a unique RI improving both the quality of and access to atmospheric
observations, developing new methods and protocols, and harmonizing existing
observations of the atmospheric variables listed in Appendix 1. Appendix 1
includes an updated list of all ACTRIS variables associated to recommended
measurement methodology.
## The mission, overall goal and structure of the ACTRIS Data Centre
The mission of the ACTRIS Data Centre (DC) is to compile, archive and provide
access to well documented and traceable ACTRIS measurement data and data
products, including digital tools for data quality control, analysis,
visualisation, and research. As a tool for science, the highest priorities for
the ACTRIS DC are to maintain and increase the availability of ACTRIS data and
data products relevant to climate and air quality research for all interested
users.
The overall goal of the ACTRIS Data Centre (DC) is to provide scientists and
other user groups with free and open access to all ACTRIS data, complemented
with access to innovative and mature data products, together with tools for
quality assurance (QA), data analysis and research. ACTRIS data and products
should be findable, accessible, interoperable and reusable (FAIR), and the
data centre works towards fulfilling the FAIR principles. The numerous
measurement methodologies applied in ACTRIS result in a considerable diversity
of the data collected. In accordance with these requirements, the ACTRIS DC
will be organized in 6 Units, with clear links and procedures for interaction
between the data centre Units, National Facilities (NFs) and topical centres
(TCs). The ACTRIS DC will be coordinated by the ACCESS unit leader and all
data is linked through the ACTRIS data portal serving as the access point to
all data and related information. The units and short names are:
* ACTRIS data and services access unit (ACCESS)
* ACTRIS In situ data centre unit (In-Situ)
* ACTRIS Aerosol remote sensing data centre unit (ARES)
* ACTRIS Cloud remote sensing data centre unit (CLU)
* ACTRIS trace gases remote sensing data centre unit (GRES)
* ACTRIS Atmospheric simulation chamber data centre unit (ASC)
During the ACTRIS implementation phase (expected 2020-2024), the Central
Facilities will be constructed and their services tested. The ACTRIS Central
Facilities host selection was part of ACTRIS PPP, and the following consortium
was selected to host the ACTRIS Data Centre and the various units, with
services to data producers and data users.
Figure 2: Architecture of the ACTRIS Data Centre
<table>
<tr>
<th>
Name of Central Facility and associated Unit
</th>
<th>
Hosting institution and contribution
</th>
<th>
Main activities
</th> </tr>
<tr>
<th>
ACTRIS data and services access unit (ACCESS)
</th>
<th>
NILU (lead),
CNRS,
MetNo, BSC
</th>
<th>
ACTRIS web interface for data, services and tools, called “The ACTRIS Data
Centre”. Main activities are discovery and access to ACTRIS data and data
products, digital tools provided by the topical centres and the data centre
units, documentation, access to software and tools for data production. Offer
visualisation of ACTRIS data products. Data production of selected Level 3
data and synergy data products. The data centre will offer bridge to external
data bases and sources.
</th> </tr>
<tr>
<td>
ACTRIS In-Situ data centre unit
(In-Situ)
</td>
<td>
NILU
</td>
<td>
Data curation service for in situ data: all aerosol, cloud and trace gas in
situ data. This comprises inclusion of data in the data base EBAS, archiving
and documentation. Support for centralized data processing, harmonization,
traceability, quality control and data product generation. Training and online
tools for QA, QC. The activity enables RRT and NRT delivery.
</td> </tr>
<tr>
<td>
ACTRIS Aerosol remote sensing data centre unit
(ARES)
</td>
<td>
CNR (lead),
CNRS
</td>
<td>
Aerosol remote sensing data processing and curation. This includes centralized
processing, traceability, harmonization and data versioning, quality control,
data archiving in EARLINET DB, data provision and documentation. The activity
enables RRT and NRT delivery. Tutorial activities. Production of level 3 data
for climatological analysis and new products.
</td> </tr>
<tr>
<td>
ACTRIS Cloud remote sensing data centre unit
(CLU)
</td>
<td>
FMI
</td>
<td>
Data curation service for cloud remote sensing data. Support for centralized
cloud remote sensing data processing, traceability, harmonization, automated
quality control and product generation, and data archiving. Enables RRT and
NRT delivery. Production of level 3 data for NWP model evaluation.
</td> </tr>
<tr>
<td>
ACTRIS Atmospheric simulation chamber data centre unit
(ASC)
</td>
<td>
CNRS
</td>
<td>
Data curation service for atmospheric simulation chamber data. This includes
standardized process for data submission, quality control, inclusion of data
in the AERIS data base, search metadata creation and provision and archiving.
</td> </tr>
<tr>
<td>
ACTRIS trace gases remote sensing data centre unit
(GRES)
</td>
<td>
CNRS
</td>
<td>
Data curation service for reactive trace gases remote sensing data. This
comprises standardized process for data submission, quality control, inclusion
of data in the AERIS data base, metadata creation and provision and archiving.
Production of level 3 data for climatological analysis, and added-value
products (quicklooks, links to EVDC - ESA Atmospheric Validation Data Centre).
</td> </tr> </table>
Table 1: Short description of the ACTRIS DC units and the research performing
organizations leading and contributing to the units.
## The overall goal and structure of ACTRIS Data Management Plan
The ACTRIS Data Management Plan (DMP) should document the key elements of the
ACTRIS data management life cycle, and the plans for the data collected,
processed and/or generated. The goal of the DMP is to describe the present
situation and the operational ACTRIS data centre. Furthermore, the DMP should
also describe the agreed technical solutions that are currently under
implementation, and outline the strategy and development needed towards making
ACTRIS data FAIR at ACTRIS Data Centre level.
The ACTRIS DMP is a "living" online document which is set up to be machine-
actionable as a part of the FAIR data ecosystem. The DMP should be a hub of
information on ACTRIS FAIR digital objects. The goal is to make the ACTRIS DMP
accessible for all stakeholders (repository operators, funders, researchers,
publishers, infrastructure providers etc.) by making it available and
accessible for both humans and machines. We currently use GitHub as the
platform for collaboration on the DMP; this enables all actors working with or
within ACTRIS to directly contribute and suggest changes to the document.
Furthermore, the ACTRIS Data Management Plan should follow the glossary of
terminology and definitions used in ACTRIS.
# ACTRIS data and ACTRIS data levels
ACTRIS data are data from observational or exploratory National Facilities
complying with the procedures established within ACTRIS. ACTRIS data comprises
ACTRIS variables resulting from measurements at National Facilities that fully
comply with the standard operating procedures (SOP), measurement
recommendations, and quality guidelines established within ACTRIS. The ACTRIS
atmospheric variables are listed in Appendix I, associated with the
corresponding recommended methodology.
There are 4 levels of ACTRIS data:

* ACTRIS level 0 data: Raw sensor output, either in mV or physical units. Native resolution, with the metadata necessary for the next level.
* ACTRIS level 1 data: Calibrated and quality-assured data with a minimum level of quality control.
* ACTRIS level 2 data: Approved and fully quality-controlled ACTRIS data product or geophysical variable.
* ACTRIS level 3 data: Elaborated ACTRIS data products derived by post-processing of ACTRIS level 0, 1 and 2 data, and data from other sources. The data can be gridded or not.
In addition to these data products, which are completely under the control of
ACTRIS with established procedures and standards, the ACTRIS DC will also
produce additional data products of interest to the scientific and user
communities. These are ACTRIS synthesis products: data products from e.g.
research activities, not under direct ACTRIS responsibility, but for which
ACTRIS offers repository and access.
Figure 3: ACTRIS data levels
The list of ACTRIS variables is expected to increase during the progress of
ACTRIS. Level 3 data products are expected to increase in quantity and number
of variables because of the expected increase in the synergistic usage of
ACTRIS data with other datasets. Additionally, the expected technological and
methodological developments fostered by ACTRIS itself will increase the ACTRIS
observational capabilities and therefore the number and quality of observable
atmosphere-related variables (Level 1 and Level 2 products).
# Data summary of the ACTRIS data centre
The purpose of the data collection/generation
The primary goal of ACTRIS is to produce high-quality integrated datasets in the
area of atmospheric sciences and provide services, including access to
instrumented platforms, tailored for scientific and technological usage. The
purpose of the data collection and generation of data products in ACTRIS is to
provide open access to aerosol, cloud and trace gas in situ and remote sensing
measurements of high quality, benefiting a large community of scientists
involved in atmospheric science and related areas as well as policy makers,
the private sector, educators and the general public.
See the Stakeholder Handbook for more information.
The relation to the objectives of the project as stated in the Stakeholder
Handbook
The primary goal of ACTRIS is to produce high quality integrated datasets in
the area of atmospheric sciences and provide services, including access to
instrumented platforms, tailored for scientific and technological usage. The
main objectives of ACTRIS are:
* to provide information on the 4D-composition and variability of the physical, optical and chemical properties of short-lived atmospheric constituents, from the surface throughout the troposphere to the stratosphere, with the required level of precision, coherence and integration;
* to provide information and understanding on the atmospheric processes driving the formation, transformation and removal of short-lived atmospheric constituents;
* to provide efficient open access to ACTRIS data and services and the means to effectively use the ACTRIS products;
* to ensure and raise the quality of data and use of up-to-date technology used in the RI and the quality of services offered to the community of users, involving partners from the private sector; and
* to promote training of operators and users and enhance linkage between research, education and innovation in the field of atmospheric science.
Management of ACTRIS data relates to measuring atmospheric composition and the
ability to predict the future behavior of the atmosphere over all time scales.
High quality observational data harmonized across the countries and continents
facilitates this, and needs to be supported by:
* Documentation of archiving procedures and access to level 0 -> level 3 data produced by the National Facilities (NFs), Topical Centres (TCs), and Central Facilities (CFs)
* Documented and traceable processing chain of level 0 data
* Documented, traceable processing and long-term archiving and preservation of all ACTRIS level 1 to level 3 data and data products
* Access to ACTRIS data, data products, and digital tools through a single point of entry
* Documentation of data, data flow, citation service, and data attribution, including version control, data traceability, and interoperability,
* Data curation and support for campaigns and dedicated research projects and initiatives, external or internal to ACTRIS.
Main users of ACTRIS data and software
ACTRIS will produce data and data products essential to a wide range of
communities as described in detail in the Stakeholder Handbook, section
“Users” including:
* Atmospheric science research communities world-wide
* The climate and air-quality, observational/ experimental/ modelling/ satellite communities, national and international research programmes and organisations;
* Environmental science research communities and communities from other neighboring fields: hydro-marine, bio-ecosystem, geosciences, space physics, energy, health, and food domains, to study interactions and processes across different disciplines;
* Instrument manufacturers and sensor industries for development, testing, prototyping and demonstration;
* Operational services, National weather services, climate services for model validation, weather and climate analysis and forecasting;
* Space agencies for validation and the development of new satellite missions;
* National and regional air quality monitoring networks and environmental protection agencies for air quality assessments and validation of air pollution models;
* Policy makers and local/regional/national authorities, for climate, air-quality, health and atmospheric-hazards related information for decision making and policy development.
* Copernicus atmospheric monitoring service (ECMWF)
* Science community working on air quality, climate change and stratospheric ozone depletion issues
## ACTRIS In situ data centre unit (In-Situ)
The In-Situ data centre unit provides data curation service for aerosol, cloud
and trace gas in situ data, as well as archiving of this data using the EBAS
database. This comprises tools for harmonized data submission and metadata
templates, inclusion of data and meta data in the data base, traceability,
harmonization and data versioning, quality control, archiving, documentation
and data provision. Training and online tools for QA, QC are offered. The
activity enables RRT and NRT data compilation and delivery and provides
tutorial activities. Furthermore, support for centralized data processing,
harmonization, and data product generation, both level 2 and level 3 is
offered and further implemented during the implementation phase.
The types and formats of data generated/collected
The ACTRIS In-situ data centre unit is supported by the EBAS database
infrastructure. In situ data submitted to ACTRIS need to be formatted in the
EBAS NASA-Ames format (ASCII file) by the data originator. There are exsisting
instructions and templates for each instrument/group of instruments. The EBAS
NASA-Ames format is based on the ASCII text NASA-Ames 1001 format, but
contains additional metadata specifications ensuring proper documentation from
the EBAS-Submit documentation website as well as tools for file-generation
(beta) and file-submission.
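To make the structure concrete, the sketch below writes a minimal NASA-Ames
1001 (FFI 1001) file of the kind the EBAS format builds on. Every station,
variable and metadata value in it is invented for illustration; real
submissions must follow the instrument-specific templates from the EBAS-Submit
website.

```python
# Minimal sketch of a NASA-Ames 1001 (FFI 1001) file, the base format of
# EBAS NASA-Ames. All station/variable values here are invented examples;
# real files must follow the instrument-specific EBAS-Submit templates.
header_lines = [
    "Doe, Jane",                         # ONAME: data originator
    "XX01L, Example Institute",          # ORG: originator organisation
    "Example nephelometer",              # SNAME: data source
    "ACTRIS",                            # MNAME: mission/programme
    "1 1",                               # IVOL NVOL
    "2019 01 01 2020 06 15",             # data start date, revision date
    "0.041667",                          # DX: interval (1 h in days)
    "days from file reference point",    # XNAME: independent variable
    "2",                                 # NV: number of dependent variables
    "1 1",                               # VSCAL: scale factors
    "9999.99 9.999",                     # VMISS: missing-value codes
    "aerosol_light_scattering_coefficient, 1/Mm",  # VNAME 1
    "numflag, no unit",                  # VNAME 2 (flag column)
    "0",                                 # NSCOML: no special comments
    "1",                                 # NNCOML: one normal comment line
    "Station code: XX0001R",             # EBAS metadata lives in this block
]
nlhead = len(header_lines) + 1           # +1 for the first line itself
with open("example.nas", "w") as f:
    f.write(f"{nlhead} 1001\n")
    f.write("\n".join(header_lines) + "\n")
    f.write("0.000000 12.34 0.000\n")    # one data record: time, value, flag
```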
ACTRIS in situ data are also available in netCDF 4 format through the EBAS
Thredds Server, following the CF 1.7 convention and the Attribute Convention
for Data Discovery 1.3 (ACDD).
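As a sketch of what machine access to these netCDF products can look like, the
snippet below opens a dataset over OPeNDAP with xarray and inspects its CF 1.7
and ACDD attributes. The dataset path is a placeholder: actual paths must be
located through the EBAS/ACTRIS catalogues.

```python
# Sketch of reading an ACTRIS in situ product in netCDF 4 from the EBAS
# Thredds server via OPeNDAP. The dataset path is a placeholder; real paths
# are found through the EBAS/ACTRIS catalogues.
import xarray as xr

URL = "https://thredds.nilu.no/thredds/dodsC/ebas/example_dataset.nc"  # hypothetical

ds = xr.open_dataset(URL)

# CF 1.7 and ACDD discovery metadata are global attributes of the file.
print(ds.attrs.get("Conventions"))          # e.g. "CF-1.7, ACDD-1.3"
print(ds.attrs.get("time_coverage_start"))  # ACDD temporal extent

# Each variable carries CF attributes such as standard_name and units.
for name, var in ds.data_vars.items():
    print(name, var.attrs.get("standard_name"), var.attrs.get("units"))
```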
Re-use of existing data
The ACTRIS data user interface will include access to aerosol and trace gas in
situ legacy data resulting from ACTRIS pre-projects (for In-Situ: CREATE,
EUSAAR, ACTRIS-FP7). These will also be included as a part of the ACTRIS In
Situ data centre unit. Legacy data resulting from ACTRIS pre-projects will be
available in the same format as current products.
The origin of the data
The data are derived from instrument raw data, obtained through either online
or offline observations.
The expected size of the data
<table>
<tr>
<th>
Type
</th>
<th>
Number of annual datasets (end 2019)
</th>
<th>
Number of annual datasets (min by 2025)
</th>
<th>
Number of annual datasets (max by 2025)
</th> </tr>
<tr>
<td>
ACTRIS in situ aerosol data
</td>
<td>
60
</td>
<td>
50
</td>
<td>
120
</td> </tr>
<tr>
<td>
ACTRIS in situ cloud data
</td>
<td>
0
</td>
<td>
35
</td>
<td>
105
</td> </tr>
<tr>
<td>
ACTRIS in situ trace gas data
</td>
<td>
27
</td>
<td>
30
</td>
<td>
60
</td> </tr> </table>
Table 2: Number of annual datasets
<table>
<tr>
<th>
Type
</th>
<th>
Data volume (end 2019)
</th>
<th>
Data volume (min by 2025)
</th>
<th>
Data volume (max by 2025)
</th> </tr>
<tr>
<td>
ACTRIS in situ aerosol data
</td>
<td>
18 000 MB
</td>
<td>
15 000 MB
</td>
<td>
50 000 MB
</td> </tr>
<tr>
<td>
ACTRIS in situ cloud data
</td>
<td>
0 MB
</td>
<td>
1 GB
</td>
<td>
3 GB
</td> </tr>
<tr>
<td>
ACTRIS in situ trace gas data
</td>
<td>
300 MB
</td>
<td>
200 MB
</td>
<td>
400 MB
</td> </tr> </table>
Table 3: Data volume
Data utility
According to IPCC AR5, aerosol particles in the atmosphere are the most
prominent source of uncertainty in climate predictions. Depending on their
properties, they can have a warming as well as a cooling effect on climate by
scattering and absorbing solar radiation, and they can increase the brightness
and lifetime of clouds. Volatile organic compounds (VOCs) are one source of
precursors for aerosol particles, forming condensable vapours during oxidation
and decay in the atmosphere. Through interaction with nitrogen oxides (NO_x),
themselves a pollutant emitted by combustion, decaying VOCs form ozone (O_3),
another pollutant. All of these, particulate matter, VOCs, NO_x, and ozone,
have adverse effects on human health. Data on concentrations and properties of
these compounds stored in the ACTRIS DC In Situ unit address all of the named
effects:

* In Situ data feed into the IPCC assessment reports in order to quantify and reduce the uncertainty of climate change.
* In Situ data are the basis of national and international assessment reports of air quality.
* In Situ data feed into and validate operational air quality prediction products, e.g. by Copernicus.
Outline of data life cycle (workflow and workflow diagram)
Detail on the data life cycle and workflow (workflow diagrams for data
production) for in situ observations can be found in Appendix 3: ACTRIS in
situ aerosol, cloud and trace gas data lifecycle and workflow (draft).
## ACTRIS Aerosol remote sensing data centre unit (ARES)
The ARES data centre unit provides a data curation and data processing service
for aerosol remote sensing data coming from lidar and photometer observations.
This includes centralized data processing, data storage, recording of metadata
in a dedicated RDBMS, traceability, harmonization and data versioning, quality
control, documentation and data provision. The unit allows for RRT and NRT
data provisioning and offers support and training activities. Furthermore,
level 3 data production for climatological analysis and the delivery of new
data products will be further implemented and offered during the
implementation phase.

The main goal is to provide access to high-quality and documented datasets of
the vertical distribution of aerosol optical properties in the whole
troposphere and upper stratosphere with short time resolution. This long-term
dataset collected at continental scale allows:

* investigation of the relationship between near-surface processes (such as pollution or air quality issues) and atmospheric aerosol content;
* addressing the challenging issue of the direct and indirect effects of aerosols on climate change.
The types and formats of data generated/collected
The ACTRIS ARES data centre unit is built on the heritage of the EARLINET
database infrastructure and integrates the photometer aerosol data processing.
Aerosol remote sensing data submitted to ACTRIS need to be compliant with a
specific format established by the ARES unit centralized processing suite. All
further data levels are produced by the ARES processing suite. ARES provides
data compliant with NetCDF4, following the Climate and Forecast (CF) 1.7
conventions.
* ARES Level 1 data products consist of high- and low-resolution total attenuated backscatter and volume depolarization ratio time series provided in NRT or RRT; data provided by the photometer observations are also available. Additionally, ARES provides columnar information and synergistic lidar/photometer products, such as vertical profiles of aerosol microphysical properties, as Level 1 data.
* ARES Level 2 data products contain fully quality-assured aerosol extinction, backscatter, lidar ratio, Angstrom exponent and depolarization ratio vertical profiles, and fully quality-controlled columnar information and aerosol microphysical property profiles.
* ARES Level 3 data products are retrieved from the level 2 data and provide statistical analysis (including seasonality and annuality) of the most important aerosol optical parameters.
Re-use of existing data
The ACTRIS data user interface will include access to aerosol remote sensing
legacy data resulting from ACTRIS pre-projects (for ARES: EARLINET, EARLINET-
ASOS). These will also be included as a part of the ACTRIS ARES data centre
unit. Legacy data resulting from ACTRIS pre-projects will be available in the
same format as current products.
The origin of the data
The data are derived from instrument raw data provided by the data originators
in a common format (NetCDF).
The expected size of the data
Table 4: Number of annual datasets
Table 5: Data volume
Data utility
Atmospheric aerosols are considered one of the major uncertainties in climate
forcing, and a detailed aerosol characterization is needed in order to
understand their role in atmospheric processes as well as their effects on
human health and the environment. The most significant source of uncertainty
is their large variability in space and time. Due to their short lifetime and
strong interactions, their global concentrations and properties are poorly
known. For these reasons, information on the large-scale three-dimensional
aerosol distribution in the atmosphere should be continuously monitored.
Information on the vertical distribution is particularly important, and lidar
remote sensing is the most appropriate tool for providing it. ARES data
products are particularly useful for addressing important issues like
validation and improvement of models that predict the future state of the
atmosphere and its dependence on different scenarios describing economic
development, including actions taken to preserve the quality of the
environment.
* ARES Level 1 data are particularly interesting for several applications such as model assimilation and monitoring of special/critical events (volcanic eruptions, dust intrusions, ...).
* ARES Level 2 data allow for an optimal and complete optical and microphysical characterization of atmospheric aerosol. This is the fundamental starting point for any study regarding the assessment of aerosol in many atmospheric processes (climatology, climate change, Earth radiative budget, aerosol layer characterization, long range transported aerosol processes).
* ARES Level 3 data are climatological products providing statistical analysis of aerosol optical parameters. These products are useful for the characterization of different sites all over Europe as well as to underline seasonalities, annualities and trends.
Outline of data life cycle (workflow and workflow diagram)
Details of the data life cycle and workflow (workflow diagrams for data
production) for aerosol remote sensing observations can be found in Appendix
4: ACTRIS aerosol remote sensing data lifecycle and workflow (draft).
## ACTRIS Cloud remote sensing data centre unit (CLU)
The CLU data centre unit provides data curation and data processing service of
cloud remote sensing data. This includes centralized processing, traceability,
harmonization and data versioning, quality control, data provision and
archiving, and documentation. The activity enables RRT and NRT data
compilation and delivery, and participation in training. Furthermore, data
product generation of level 3 data for forecast and climate model evaluation,
climatological analysis and new products is offered and further implemented
during the implementation phase.
The types and formats of data generated/collected
The ACTRIS CLU data centre unit makes use of the Cloudnet database
infrastructure. Level 0 cloud remote sensing data submitted to ACTRIS CLU are
required to be in a specified format compliant with the centralized processing
suite; all further data levels are produced by the CLU processing suite. CLU
provides data compliant with the netCDF-3 and netCDF-4 formats as far as
possible, following the CF 1.7 convention.
Re-use of existing data
The ACTRIS data user interface will include access to cloud remote sensing
legacy data resulting from ACTRIS pre-projects (for CLU: Cloudnet). These will
also be included as a part of the ACTRIS CLU data centre unit. Legacy data
resulting from ACTRIS pre-projects will be available in the same format as
current products.
The origin of the data
Data are derived from instrument raw data, coupled with thermodynamic profiles
from an NWP model.
The expected size of the data
<table>
<tr>
<th>
Type
</th>
<th>
Number of annual datasets (end 2019)
</th>
<th>
Number of annual datasets (min by 2025)
</th>
<th>
Number of annual datasets (max by 2025)
</th> </tr>
<tr>
<td>
ACTRIS cloud remote sensing data
</td>
<td>
11
</td>
<td>
15
</td>
<td>
25
</td> </tr> </table>
Table 6: Number of annual datasets
<table>
<tr>
<th>
Type
</th>
<th>
Data volume (end 2019)
</th>
<th>
Data volume (min by 2025)
</th>
<th>
Data volume (max by 2025)
</th> </tr>
<tr>
<td>
ACTRIS cloud remote sensing data
</td>
<td>
15 TB
</td>
<td>
50 TB
</td>
<td>
150 TB
</td> </tr> </table>
Table 7: Data volume
Data utility
Clouds are highly variable in time, space, and in their macro- and
microphysical aspects. This variability directly impacts radiative transfer
and the hydrological cycle, and the accurate representation of clouds is
fundamental to climate and numerical weather prediction. CLU products are
particularly valuable for investigating the response of cloud microphysical
processes to changes in other atmospheric variables (aerosol-cloud-
precipitation interaction), evaluating and developing the parametrization
schemes used to represent clouds in climate and numerical weather prediction
models, and for validating satellite products used in data assimilation.

CLU level 2 data are utilised by a large community of atmospheric scientists
and operational agencies, with products permitting both process studies and
model parametrization. CLU level 3 comprises climatological products for
climate and forecast model evaluation, together with seasonal and diurnal
composites enabling the characterisation of cloud properties across Europe.
Outline of data life cycle (workflow and workflow diagram)
Details on the data life cycle and workflow (workflow diagrams for data
production) for cloud remote sensing observations can be found in Appendix 5:
ACTRIS cloud remote sensing data lifecycle and workflow (draft).
## ACTRIS trace gases remote sensing data centre unit (GRES)
The ACTRIS trace gases remote sensing data centre unit (GRES) is supported by
the AERIS data base (https://gres.aeris-data.fr). The GRES data centre unit
provides data curation service for reactive trace gases remote sensing data.
This includes data conversion processing, standardized process for data
submission, quality control, inclusion of data in the data base, search
metadata creation, data provision and archiving. In addition, data product
generation of level 3 for climatological analysis and added values products
(quicklooks, links to EVDC-ESA Atmospheric Validation Data Centre) is offered
and implemented during the implementation phase.
The ACTRIS-GRES unit is structured in one unique database including
measurements issued from five types of instruments:

* FTIR: Fourier Transform Infra-Red spectrometry
* UVVIS: Ultra-Violet and Visible spectroscopy, including:
  * UV-VIS ZS (zenith-sky) SAOZ (Système d'Analyse par Observation Zénithale) spectrometers
  * UVVIS MAX-DOAS (Multi-AXis Differential Optical Absorption Spectroscopy) instruments
  * PANDORA instruments
* LIDAR DIAL: Differential Absorption Lidar
The types and formats of data generated/collected
Level 2 and level 3 trace gas remote sensing data products are either profiles
(O3) or columns (O3, C2H6, HCHO, NO2, NH3 …). The level 2b data are processed
from the consolidation of level 2a data using quality assurance and quality
control procedures. The level 3 data are produced from level 2b data, trace
gas profiles or columns, and correspond to monthly averaged climatologies as
well as coincident data with satellite overpasses.

Level 2 and level 3 trace gas remote sensing data submitted to ACTRIS need to
be in the GEOMS HDF data format (Generic Earth Observation Metadata Standard,
http://www.ndsc.ncep.noaa.gov/data/formats) following the appropriate GEOMS
template for FTIR, UVVIS and LIDAR measurements
(https://evdc.esa.int/tools.data-formatting-templates/). The GEOMS data format
fulfils the requirements needed to set up the ACTRIS data curation service for
trace gas remote sensing data. The level 2 and level 3 data will also be
converted into NetCDF (https://www.unidata.ucar.edu/software/netcdf/) version
4 format following the CF 1.7 (Climate and Forecast) conventions and be
disseminated. The Climate and Forecast conventions are metadata conventions
for earth science data; they define metadata that are included in the same
file as the data, making the file "self-describing". Level 0 and 1 data
submitted to ACTRIS GRES are required to be in a specified format compliant
with the centralized processing suite. All further data levels are produced by
the NF processing suite.
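For illustration, here is a minimal sketch of inspecting a GEOMS file,
assuming an HDF5 encoding (GEOMS files may also be HDF4). The file name is
hypothetical, and the loops simply list whatever global attributes and
variables the file contains.

```python
# Sketch of inspecting a GEOMS-formatted file, assuming an HDF5 encoding;
# the file name is illustrative only.
import h5py

with h5py.File("groundbased_ftir_example.h5", "r") as f:
    # GEOMS stores its metadata as attributes; list the global ones.
    for key, value in f.attrs.items():
        print(key, "=", value)
    # Datasets correspond to GEOMS variables (e.g. ozone columns/profiles).
    for name in f:
        print(name, f[name].shape)
```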
Re-use of existing data
The ACTRIS data user interface will include access to reactive trace gas
remote sensing legacy data resulting from the AERIS project (for NDACC data).
These will also be included as a part of the ACTRIS GRES data centre unit.
Legacy data resulting from the AERIS project will be available in the same
format as current products.
The origin of the data
The L2 data are derived from instrument raw data, through offline
observations. All the data processing is performed by NFs.
The expected size of the data
<table>
<tr>
<th>
Type
</th>
<th>
Number of annual datasets (end 2019)
</th>
<th>
Number of annual datasets (min by 2025)
</th>
<th>
Number of annual datasets (max by 2025)
</th> </tr>
<tr>
<td>
ACTRIS-GRES
FTIR
</td>
<td>
276
</td>
<td>
624
</td>
<td>
3744
</td> </tr>
<tr>
<td>
ACTRIS-GRES SAOZ
</td>
<td>
2900
</td>
<td>
7250
</td>
<td>
14500
</td> </tr>
<tr>
<td>
ACTRIS-GRES MAX-DOAS
</td>
<td>
14600
</td>
<td>
14600
</td>
<td>
14600
</td> </tr>
<tr>
<td>
ACTRIS-GRES
PANDORA
</td>
<td>
37230
</td>
<td>
7665
</td>
<td>
10220
</td> </tr>
<tr>
<td>
ACTRIS-GRES LIDAR DIAL
</td>
<td>
400
</td>
<td>
100
</td>
<td>
200
</td> </tr> </table>
Table 8: Number of annual datasets
<table>
<tr>
<th>
Type
</th>
<th>
Data volume (end 2019)
</th>
<th>
Data volume (min by 2025)
</th>
<th>
Data volume (max by 2025)
</th> </tr>
<tr>
<th>
ACTRIS-GRES FTIR
</th>
<th>
2,5 GB
</th>
<th>
25 GB
</th>
<th>
150 GB
</th> </tr>
<tr>
<td>
ACTRIS-GRES SAOZ
</td>
<td>
0,6 GB
</td>
<td>
6 GB
</td>
<td>
15 GB
</td> </tr>
<tr>
<td>
ACTRIS-GRES MAXDOAS
</td>
<td>
600 GB
</td>
<td>
3 TB
</td>
<td>
3 TB
</td> </tr>
<tr>
<td>
ACTRIS-GRES
PANDORA
</td>
<td>
1,7 TB
</td>
<td>
6 TB
</td>
<td>
10 TB
</td> </tr>
<tr>
<td>
ACTRIS-GRES LIDAR DIAL
</td>
<td>
0,4 GB
</td>
<td>
1 GB
</td>
<td>
2 GB
</td> </tr> </table>
Table 9: Data volume
Data utility
GRES data can be used to monitor the evolution of key stratospheric trace
gases such as ozone under the effects of anthropogenic emissions, climate
change and natural events. The retrieval of previous data by GRES will allow
homogeneous data series for computing trends. The data can also be used in
support of the validation of satellite measurements deployed by international
space agencies, as well as of model simulations.
Outline of data life cycle (workflow and workflow diagram)
Detail on the data life cycle and workflow (workflow diagrams for data
production) for trace gases remote sensing data can be found in Appendix 6.
## ACTRIS Atmospheric simulation chamber data centre unit (ASC)
The ASC data centre unit provides data curation service for data obtained from
experiments in atmospheric simulation chambers (ACTRIS exploratory platforms).
This includes tools for harmonized data submission and meta data templates,
inclusion of data and metadata in the database, traceability, harmonization
and data versioning, quality control, archiving, documentation and data
provision. The ASC unit is structured in three pillars:
* The Database of Atmospheric Simulation Chamber Studies (DASCS) provides access to experimental data (level 2 data), typically time-series of measured parameters during an experiment in a simulation chamber.
* The Library of Analytical Resources (LAR) provides quantitative analytical resources that include infrared spectra and mass spectra of molecules and derivatives (level 3 data).
* The Library of Advanced Data Products (LADP) provides different types of mature data products (level 3 data): rate constants of reactions, quantum yields and photolysis frequencies of trace gas compounds, secondary organic aerosol (SOA) yields, mass extinction/absorption/scattering coefficients and complex refractive index of aerosols, growth factors of aerosols and modelling tools. The detailed list of ACTRIS level 3 data products is given in Appendix 9.
The types and formats of data generated/collected
The ACTRIS ASC data centre unit makes use of the EUROCHAMP database
(https://data.eurochamp.org/), which is hosted by the AERIS infrastructure.
Data submitted to the DASCS pillar have to be provided by NFs in a standard
format called the “EDF format” (EUROCHAMP Data Format), which is based on an
ASCII text format and contains additional metadata in a header. These data are
completed with rich metadata which are available from the website and give
access to a technical description of the chambers (size, volume, walls,
irradiation system), the experimental protocols used for the generation of the
data, and an “auxiliary mechanism” which provides the chamber-dependent
parameters affecting the observations. Currently, work is being conducted on
providing tools for access and download of data also in the netCDF 4 format,
compliant with the CF 1.7 convention. This will be implemented during the
ACTRIS implementation phase.
Level 3 data provided in LAR are IR and mass spectra in the JCAMP-DX format,
which is the standard format recommended by IUPAC for spectra. It is a 2D
graphic format based on ASCII. Metadata are attached and made available
through the ACTRIS data user interface. These data are provided by NFs.
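Since JCAMP-DX is a line-oriented ASCII format built from labelled data
records of the form ##KEY=value, a few lines of code suffice to read the
header of such a spectrum. The sketch below assumes nothing beyond that basic
structure, and the file name is hypothetical.

```python
# Minimal sketch of reading the labelled data records (##KEY=value) from a
# JCAMP-DX spectrum file such as those in the LAR; the parser stops at the
# data table and ignores its contents.
def read_jcampdx_header(path):
    header = {}
    with open(path) as f:
        for line in f:
            if line.startswith("##"):
                key, _, value = line[2:].partition("=")
                if key.strip().upper() == "XYDATA":
                    break  # compressed data table starts here
                header[key.strip()] = value.strip()
    return header

meta = read_jcampdx_header("example_spectrum.jdx")  # hypothetical file
print(meta.get("TITLE"), meta.get("XUNITS"), meta.get("YUNITS"))
```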
Level 3 data provided in LADP are of different types and thus have different
formats. However, each type of data is provided in a harmonized format. Most
of them are provided as a unique value with metadata attached.
Re-use of existing data
The ACTRIS data user interface will include access to atmospheric simulation
chamber legacy data resulting from ACTRIS pre-projects (for ASC: EUROCHAMP-1,
-2, and EUROCHAMP-2020). These will also be included as a part of the ACTRIS
ASC data centre unit. Legacy data resulting from ACTRIS pre-projects will be
available in the same format as current products.
The origin of the data
Data provided in DASCS and LAR pillars are derived from instrument raw data
and data provided in LADP are produced from L2 data processing. All the data
processing is performed by NFs.
The expected size of the data
<table>
<tr>
<th>
Type
</th>
<th>
Number of annual datasets (end 2019)
</th>
<th>
Number of annual datasets (min by 2025)
</th>
<th>
Number of annual datasets (max by 2025)
</th> </tr>
<tr>
<td>
ACTRIS-ASC DASCS
</td>
<td>
200
</td>
<td>
50
</td>
<td>
300
</td> </tr>
<tr>
<td>
ACTRIS-ASC LAR
</td>
<td>
20
</td>
<td>
10
</td>
<td>
50
</td> </tr>
<tr>
<td>
ACTRIS-ASC LADP
</td>
<td>
70
</td>
<td>
50
</td>
<td>
200
</td> </tr> </table>
Table 10: Number of annual datasets
<table>
<tr>
<th>
Type
</th>
<th>
Data volume (end 2019)
</th>
<th>
Data volume (min by 2025)
</th>
<th>
Data volume (max by 2025)
</th> </tr>
<tr>
<td>
ACTRIS-ASC DASCS
</td>
<td>
1,2 GB
</td>
<td>
1,5 GB
</td>
<td>
2,4 GB
</td> </tr>
<tr>
<td>
ACTRIS-ASC LAR
</td>
<td>
67 MB
</td>
<td>
76 MB
</td>
<td>
120 MB
</td> </tr>
<tr>
<td>
ACTRIS-ASC LADP
</td>
<td>
26 KB
</td>
<td>
200 KB
</td>
<td>
500 KB
</td> </tr> </table>
Table 11: Data volume
Data utility
Atmospheric simulation chamber data contribute to better predict the behavior
of the atmosphere over all time scales through a detailed understanding of the
physical and chemical processes which affect air quality and climate change.
ACTRIS-ASC unit gives access to different types of data and data products
essential to a wide range of communities. Many of these parameters are
incorporated in air quality and climate models.
* Level 2 data provided in DASCS are of high interest for a large community of users in atmospheric science research and related areas, as well as the private sector. In particular, they are largely used for modelling activities to develop and/or validate chemical schemes of atmospheric models.
* Level 3 data provided in the LAR are of high interest for a large community of users in atmospheric sciences, analytical chemistry and related areas, as well as the private sector. Indeed, quantitative chemical analysis of infrared spectra for complex mixtures requires access to standards for the calibration of instruments. However, as the chemical species formed by these processes are often very complex (and not commercially available), their spectra are not available in the “classical” databases of analytical chemistry, or are not useful due to their low resolution. To tackle this issue, the EUROCHAMP consortium has developed its own Library of infrared spectra and has made it freely available to the scientific communities.
* Level 3 data products provided in the LADP are especially useful for researchers working on atmospheric observations, as well as atmospheric model development and validation. They include products for the development of chemical mechanisms in atmospheric models (e.g. rate coefficients, photolysis frequencies, SOA yields, vapor pressures, etc.), products for the retrieval of satellite data and for radiative transfer modelling, and tools to generate oxidation schemes which are very useful to interpret field measurements as well as laboratory studies.
Outline of data life cycle (workflow and workflow diagram)
A preliminary version of the data life cycle and workflow (workflow diagrams
for data production) for atmospheric simulation chamber data can be found in
Appendix 7. The definition of this workflow is still in progress and a
finalized version will be available in 2020.
## ACTRIS data and services (ACCESS)
The ACTRIS Data Centre is a distributed data centre, and the ACTRIS data and
services access unit (ACCESS) is responsible for organising access to measurement data
from the topic data centre units, and documentation of procedures as support
to observational and exploratory NFs. The ACCESS unit provides the ACTRIS web
interface for data download, services and digital tools as well as performing
data production of Level 3 data, and synergy data products.
The ACTRIS access web interface is called “The ACTRIS Data Centre” and
includes a metadata catalogue. The main activities are discovery of and access
to ACTRIS data and data products, an overview of digital tools provided by the
topical centres and the data centre units, documentation, and software and
tools for data production, as well as visualisation of ACTRIS data products
and production of level 3 and synergy data products. The data centre also
offers a bridge to external databases and sources.
The ACTRIS ACCESS unit offers access to elaborated aerosol, cloud and trace
gas data products derived from advanced multi-instrument synergistic
algorithms, long-term reanalyses, modelling, and satellite data and sources.
These can be produced within the ACCESS unit, the topical data centre units,
the topical centres, or through external contributions. The list of ACTRIS
level 3 data products is detailed in Appendix II and consists of three main
categories:
1. Level 3 data solely based on data from ACTRIS observational platforms
2. Level 3 data and tools from multi-source data integration services, employing external ground-based measurement data
3. Level 3 data products involving regional and global model data
The types and formats of data generated/collected
The objective is that most of the level 3 data generated will be in the NetCDF
data format and have metadata compliant with the NetCDF CF Metadata
Conventions. This format and metadata are widely used in the atmospheric
science community and are supported by many standard visualization and
analysis tools. Nevertheless, as the collected data can come from external
sources, non-standard formats may also be used; in these cases, the data will
rather be kept in their original format.
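As a sketch of the target form for such products, the snippet below writes a
small CF-style netCDF file with xarray. The variable, values and attributes
are synthetic and are only meant to show where the CF metadata lives.

```python
# Sketch of writing a level 3 product as CF-compliant netCDF; the variable,
# values and gridding are invented for the example.
import numpy as np
import pandas as pd
import xarray as xr

time = pd.date_range("2019-01-01", periods=12, freq="MS")
aod = xr.DataArray(
    np.random.rand(12).astype("float32"),
    dims=("time",),
    coords={"time": time},
    attrs={"long_name": "monthly mean aerosol optical depth at 550 nm",
           "units": "1"},
)
ds = xr.Dataset({"aod550": aod},
                attrs={"Conventions": "CF-1.7",
                       "title": "Example ACTRIS level 3 product (synthetic)"})
ds.to_netcdf("example_level3.nc")
```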
Re-use of existing data
The generated products and online services available from ACTRIS pre-projects
use existing ACTRIS L0-1-2, satellite and model data.
The origin of the data
The origin of the data is derived from ground-based and satellite
observations, retrieval algorithms and model simulations.
The expected size of the data
Table 12: Number of annual datasets
Table 13: Data volume
Generated (on-demand services)
<table>
<tr>
<th>
Product
</th>
<th>
Typical datasets per day
</th>
<th>
Typical volume per day
</th> </tr>
<tr>
<td>
Satellite data subsets
</td>
<td>
100
</td>
<td>
100 MB
</td> </tr>
<tr>
<td>
Transport modelling products for assessment of ... source regions
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Colocation service of data from contributing networks
</td>
<td>
400
</td>
<td>
400 MB
</td> </tr>
<tr>
<td>
Model Evaluation Service
</td>
<td>
30
</td>
<td>
300 MB
</td> </tr>
<tr>
<td>
NWP Model Evaluation Service
</td>
<td>
120
</td>
<td>
100 MB
</td> </tr> </table>
Data utility
Data from ACTRIS is contributing to better prediction of the behavior of the
atmosphere over all time scales through a detailed understanding of the
physical, optical and chemical properties of aerosols, clouds and trace gases
in the atmosphere, as well as data for improving knowledge of processes which
affect air quality and climate change.
ACTRIS data are very diverse, covering numerous measurement methodologies and
resulting in considerable diversity in the data collected. In accordance with
these requirements, the ACTRIS DC is organized in 6 units.

The ACCESS unit utilizes data from all 5 topical DC units and produces level 3
products combining various data and also models, producing new and
value-added data products. Accordingly, the ACCESS data utility can cover all
the more specific data utility aspects described in sections 3.1 – 3.5.
Outline of data life cycle (workflow and workflow diagram)
Details on the data life cycle and workflow (workflow diagrams for data
production) for level 3 data can be found in Appendix 8.
# Data Management at the ACTRIS data centre
ACTRIS data and products should be findable, accessible, interoperable and
reusable (FAIR), and the data centre works towards fulfilling the FAIR
principles. This chapter is describing the operational ACTRIS Data Flow from
National Facilities (NF) to users now, decisions that are currently under
implementation, and the work and solutions that will be implemented during the
implementation phase (2020-2025). The section starts with a brief introduction
to ACTRIS Access strategy, then introduction to the data management system in
ACTRIS in section 4.1, including detailed descriptions of data flows within
each unit (4.1.1.-4.1.5). This is followed by sections describing detailed
solutions and implementation plans making ACTRIS data and products findable
(4.2), accessible (4.3), interoperable (4.4), and reusable (4.5).
## ACTRIS access and service policy
ACTRIS offers access to a large variety of high-quality services provided by
ACTRIS facilities, to a wide range of users and needs, for scientific,
technological and innovation-oriented usage. Accordingly, ACTRIS has developed
an access strategy to give clear guidelines and describe the general
principles for access provided by ACTRIS to users. When the ACTRIS services
are in operation, users will access the ACTRIS services through a single entry
point, as shown in Figure 4 below.
Figure 4: Overview of ACTRIS Access strategy
Virtual access is wide access to ACTRIS data and digital tools and does not
require a selection process. Virtual access to ACTRIS data and digital tools
is free access, and is given in compliance with the ACTRIS data policy for
data from ACTRIS-labelled NFs. Competitive access is physical or remote access
to the ACTRIS facilities, including access to specific services offered by the
Data Centre; it shall be managed by the SAMU and requires a selection process.
This can, for example, be data centre services for comprehensive research
campaigns or large-volume data delivery tailored for specific purposes.
## Introduction and overview of ACTRIS Data Management architecture
ACTRIS Data Management is handled by the individual data centre unit:
* ACTRIS In situ data centre unit for all aerosol, cloud and trace gas in situ data - In-Situ
* ACTRIS Aerosol remote sensing data centre unit - ARES
* ACTRIS Cloud remote sensing data centre unit - CLU
* ACTRIS Trace gases remote sensing data centre unit - GRES
* ACTRIS Atmospheric simulation chamber data centre unit – ASC
* ACTRIS data and service access unit - ACCESS
An overview of the elements in the data flow is shown in Figure 5.
Figure 5: Overview of ACTRIS Data Centre components, links and main activities
for the various units
4.2.1 ACCESS role and data management
Access to quality controlled data from the topic data centre units is
organised by the ACTRIS data and service access unit (ACCESS). The ACCESS unit
includes a frequently updated metadata catalogue for identification of data,
and links to measurement data stored by the topical data centre units. ACCESS
also produces level 3 data, and organizes the catalogue of ACTRIS level 3
data, produced either by the topical data centre units or within ACCESS.
ACCESS is structuring and compiling access to services, tools and
documentation, and maintaining and further developing the web interface called
“ACTRIS Data Centre” (currently https://actris.nilu.no).
The tasks are summarized in Figure 5 above and include the organization of
ACTRIS level 3 data.
All data centre units provide metadata and interfaces for access to data and
metadata indexed in the current ACTRIS metadata catalogue, except for ASC. An
index to ASC data is under implementation, with the aim of being ready within
the first part of 2020. The metadata are used to identify and access data
through the ACTRIS Data Centre web portal. The metadata catalogue is regularly
updated, at least every night through automatic procedures. The ASC unit has
developed its own metadata catalogue, and its data and metadata are currently
available through the EUROCHAMP Data Centre portal.
Figure 7 shows the current technical architecture and the interfaces used
between the topical data centre units, as well as ACCESS and the ACTRIS Data
Centre web interface with access for users.

* We use ISO19115 with the WIS profile as the starting point for metadata exchange. The profile will be extended with ACTRIS-specific metadata.
The current setup is a web portal with a database that collects metadata from
In Situ, ARES, CLU and GRES via custom web services, but currently machine-to-
machine access is not possible. Implementation of ASC is under development and
will be ready during 2020. In the future, the aim is to harvest all ACTRIS
metadata into a single metadata catalogue, providing discovery metadata for
all ACTRIS data using ISO19115 with the WIS metadata profile, enabling
machine-to-machine access to ACTRIS metadata.
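A schematic sketch of such a nightly harvest is shown below. The endpoint URLs
and the JSON record layout are invented, since the actual unit web services
are custom; a real implementation would map each harvested record to
ISO19115/WIS XML rather than keep it as JSON.

```python
# Schematic nightly harvest of unit metadata into a central catalogue.
# All endpoint URLs and record fields are hypothetical.
import json
import urllib.request

UNIT_ENDPOINTS = {  # hypothetical service URLs
    "In-Situ": "https://example.org/insitu/metadata",
    "ARES": "https://example.org/ares/metadata",
    "CLU": "https://example.org/clu/metadata",
    "GRES": "https://example.org/gres/metadata",
}

catalogue = []
for unit, url in UNIT_ENDPOINTS.items():
    with urllib.request.urlopen(url) as response:
        records = json.load(response)
    for record in records:
        record["actris_unit"] = unit   # ACTRIS-specific extension field
        catalogue.append(record)

# A real implementation would map each record to ISO19115/WIS XML here.
print(f"harvested {len(catalogue)} discovery metadata records")
```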
Figure 6: Overview of the tasks of the ACCESS unit
Figure 7: Technical architecture of the ACTRIS meta data portal
As visualized in Figure 6, ACCESS organizes the level 3 data. The collected
and generated level 3 datasets will be extended during the implementation
phase, and the complete list of variables under implementation is included in
Appendix 2. Details of the level 3 data production in operation are included
in Appendix 9.
Overview of when data is made available (level 2 data)
<table>
<tr>
<th>
DC unit
</th>
<th>
Submission deadline
</th>
<th>
Date when data is made available by the DC unit
</th>
<th>
Provision of NRT data
</th>
<th>
Comment
</th> </tr>
<tr>
<th>
In Situ
</th>
<th>
31st of May
</th>
<th>
31st of June
</th>
<th>
hourly
</th>
<th>
</th> </tr>
<tr>
<td>
GRES
</td>
<td>
Within 4 months after measurement
</td>
<td>
Within 4 months after
measurement
</td>
<td>
</td>
<td>
There is not a specific date for data submission and availability for the GRES
and ASC units. Example: for FTIR data, NFs will deliver data every 1 to 3
months, and 15 days later the data will be made available by the DC unit.
</td> </tr>
<tr>
<td>
ARES
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
ASC
</td>
<td>
</td>
<td>
</td>
<td>
Not applicable
</td>
<td>
</td> </tr>
<tr>
<td>
CLU
</td>
<td>
Automatic
</td>
<td>
1 day after
submission
</td>
<td>
</td>
<td>
</td> </tr> </table>
Table 14: Overview of when data is made available
4.2.2 In-Situ dataflow and data management
The data management of ACTRIS in situ aerosol, cloud, and trace gas variables
(listed in Appendix 1) follows a common workflow (see Appendix 3 for details).
The workflow is separated into 2 branches:

* Online observations: Measurement is done directly on the sample air stream immediately after sampling, and reported by the instrument while the sample passes through or immediately after. Instrument QA is by on- and off-site comparisons to standard instruments / primary standards. RRT data provision is possible and the default.
* Offline observations: Measurement is done on a sample medium in which the sample is collected. Sample analysis is usually disconnected from sample collection in time and location. Sample handling is documented by a series of reports, leading to the final data product. QA covers sample handling (e.g. field blanks) and analysis (e.g. round-robin). Rapid delivery of data is possible.
Figure 8: Simplified workflow of the ACTRIS In Situ data centre unit,
focussing on distribution of responsibilities and services to users.
If an offline analysis process has been sufficiently streamlined, it may be
described by the online workflow.
ACTRIS In Situ concretises the ACTRIS data levels as follows:

* Level 0: by default, raw data as produced by the instrument, with all main and instrument status parameters provided by the instrument, brought to a well-defined data format. If needed to limit raw data volume, level 0 may be a condensed version of the instrument raw data. Discovery, use, provenance, and configuration metadata are attached, including all information needed for further data production, as far as known at station level (e.g. QC metadata). Instrument-model specific; time resolution native to the instrument; temperature and pressure conditions as provided by the instrument.
* Level 1: Contains the physical property observed, including measures of uncertainty. Instrument status parameters, QC measurements and invalid data are removed; quality control and post-calibrations are applied. Time resolution native to the instrument. Content specific to the measured property; standard conditions of temperature and pressure.
* Level 2: Contains data averaged to a homogeneous time resolution, typically 1 h. Includes a measure of atmospheric variability; content specific to the measured property.
4.2.2.1 General Characteristics of In Situ Data Production
All In Situ data products, level 1 (RRT) and level 2 (fully QCed), are
archived in the In Situ data repository, hosted in NILU’s EBAS database, and
made available through ACCESS. In Situ produces selected level 3 products from
these (Appendix 9).
As a general rule in ACTRIS in situ data QC, only data subject to instrument
malfunctions are flagged as invalid. Data with episodes, local contamination,
etc. are flagged as such, but not as invalid, to avoid limiting data use only
to certain use cases. Data level designations are used to distinguish between
data having received automatic QC, and data having undergone manual QC. When
calculating time averages in data production, any non-invalidating flags
occurring in the source data during the averaging period are copied to the
average result, while invalid data points in the source data are excluded from
the average. Temporal coverage flags are applied to the average.
The content of workflow tasks and the responsibilities for them are specified
in Appendix 3, separately for each In Situ instrument type. Certain
responsibilities common to both online and offline observations are by default
distributed between NF, TC, and DC as follows:
NF:

* Conducts / participates in on-site and off-site QA measures, and round-robin exercises as specified by the TC.
* Reacts to feedback on instrument / observation / analysis operation and data quality from both TC and DC within 1 week, and takes corrective action within 1 month.

TC:

* Maintains operating procedures for each In Situ instrument type in its responsibility.
* Defines the QA/QC measures necessary for an instrument type / observation.
* In collaboration with DC, specifies the data and metadata items contained in QA/QC measure reports for each instrument / observation type.
* Specifies the actions to be executed in each workflow task box for each instrument / observation type.
* Documents QA measures conducted by the TC using the tools jointly specified / provided by TC and DC.
DC:

DC Core services:

* Archives all level 0, 1, and 2 data occurring during workflow execution.
* Archives level 3 data produced by In Situ.
* Links data to relevant QA/QC data.
* Operation of data production and QC tools for ACTRIS in situ data, and administration of the data production workflow ensuring homogeneous data products, e.g. via a business workflow tool connecting NFs, TC, and DC.
* Archive for documentation of QA measure results throughout ACTRIS, setup of infrastructure, and standards of operation, including identification of documents.
* PID identification of all objects in ACTRIS workflow executions, incl. data (pre-)products, software, humans, organisations, and instruments, including versioning; DOIs for level 2 data products.
* Documentation of provenance throughout all ACTRIS workflows by use of a standardised provenance scheme, facilitating attribution of the entities involved in workflow execution.
* Training events for data submitters to all data centre units.
* Reacts on requests typically within 1 week / 1 month.
* Deadline for publication of data: end of August, or 1 month after closing the last issues.
* Documentation, procedures, tutorials and tools, guidance and helpdesk available to NFs.
* Access to ACTRIS In Situ level 0, 1, 2, 3 data.
* Access to ACTRIS level 2 legacy data archived in the ACTRIS data repositories, accessible via the ACTRIS web entry point.
* Bridge to external ground-based observational data relevant for ACTRIS.
* Climatology products for ACTRIS variables at National Facilities across Europe.
* Interoperability and links to other RIs and initiatives.
* Knowledge transfer and training on the use of data products and tools.
* Monitoring task execution in the unit, representing the unit in ACTRIS bodies.
* Monitoring workflow execution across In Situ TCs and DC, maintaining and updating workflow elements.
* Support to regional and global networks and related initiatives. ACTRIS will support international frameworks in the field of air quality and climate change, e.g. GAW including GALION, EMEP, and GCOS, and further utilize and add value to satellite-based atmospheric observation.

DC Added-value services:

* Access to software, digital tools and user support for processing of ACTRIS data, tailored for analysis and research.
* Aerosol surface in situ data – combination of variables and instruments. Production and distribution of surface in situ level 3 products (closure, model comparison, full-range PNSD, PM mass from size distribution, key particle optical properties at dry state).
* Contribution to the collocation service of data from regional and global networks. Benchmark data products adding complementary data from GAW and EMEP together with ACTRIS data.
* Contribution to satellite data combined with ground-based ACTRIS data. On-demand distribution of satellite data collocated with ACTRIS ground-based observations.
* Contribution to the optimal interpolation and gap-filling tool.

DC Campaign services:

* Provision of digital tools and data services during observation campaigns.
* Data curation and archiving of campaign data.
* Digital tools and products for campaign support.
* Campaign dashboard service.
4.2.2.2 Online In Situ Data Production
Already at the station, the raw data stream from the instrument is transcribed
to a homogeneous level 0 format, and annotated with discovery, use,
provenance, and configuration metadata. The level 0 data are transferred to
the ACTRIS DC at a RRT schedule (latest 3 h after measurement, normally 1 h).
At this point, the In Situ online workflow splits into 2 branches: 1. RRT data
production: incoming level 0 data are autoQCed for outliers, spikes,
calibration data, and automatically identifiable instrument malfunctions, and
flagged accordingly, yielding level 0b. From there, levels 1b and 1.5 (final
RRT data product) are produced. RRT data are offered through a data
subscription service. 2. Fully QCed data production: data are manually QCed
for outliers, spikes, calibration data, episodes (e.g. atmospheric transport,
local / regional contamination), and instrument malfunctions. Tools for manual
data QC are provided centrally. Manual QC is assisted by automatic pre-
screening of data, similar to the auto-QC for RRT data. There are 2 options
for organising the QC process, both are applied at least annually: 1. TC
review: data QC is conducted by NF and supervised by TC, and follows its own
sub-workflow. 2. NF review: data QC by an identified person under the
responsibility of the NF.
From the fully QCed level 0 data, i.e. level 0a, levels 1a and 2 (final data
product) are produced.
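The auto-QC applied in the RRT branch above can be sketched as follows; the
median-based spike test is a simple stand-in for the actual,
instrument-specific screening rules, and the flag values are illustrative.

```python
# Schematic auto-QC step for the RRT branch: screen incoming level 0 values
# for spikes/outliers and flag rather than delete them. The median-based
# spike test is a simple stand-in for the real, instrument-specific rules.
import numpy as np

def auto_qc(values, window=5, threshold=5.0):
    values = np.asarray(values, dtype=float)
    flags = np.zeros(values.size, dtype=int)        # 0 = valid
    for i in range(values.size):
        lo, hi = max(0, i - window), min(values.size, i + window + 1)
        neighbours = np.delete(values[lo:hi], i - lo)
        med = np.median(neighbours)
        mad = np.median(np.abs(neighbours - med)) or 1e-12
        if abs(values[i] - med) / mad > threshold:
            flags[i] = 1                            # 1 = flagged as spike
    return flags

series = [5.1, 5.0, 5.2, 42.0, 5.1, 5.0]
print(auto_qc(series))   # -> [0 0 0 1 0 0]
```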
All fully QCed data are to be submitted to the In Situ DC unit on an annual
basis by 31 May of the year following the calendar year to be reported. If the
TC review option is used, NFs need to submit their initial QCed version to the
review process by 31 March of that year, where the review process is typically
supervised by the TC.
The content of workflow tasks and the responsibilities for them are specified
in Appendix 3, separately for each In Situ instrument type. The following
responsibilities specific to online observations are distributed between NF,
TC, and DC as follows:

NF:

* Operates the instrument according to TC recommendations.
* Conducts data QC as specified by the TC, documents QC as specified jointly by TC and DC, and participates in data QC review, if applicable.
* Conducts on-site QA measures and calibrations as specified by the TC, and documents them using the tools specified and provided by the TC.
* Uses data acquisition and handling software as provided / specified by the TC.
* Maintains the infrastructure for RRT data transfer.

TC:

* Implements and maintains data production and QC software for each In Situ instrument type in its responsibility to the NF.
* Supervises on-site QA measures and calibrations, and provides specifications and tools for documenting these in a traceable way.
* Conducts off-site QA measures on instruments as required by instrument type.
* In collaboration with DC, specifies the data and metadata items contained in data levels 0, 1, and 2 for each instrument type.
* Implements and maintains the software executing the task boxes in the data production workflow.
* In collaboration with DC, specifies the sub-workflow implementation for data QC review, including review procedures and rules, if applicable.
* Conducts and supervises data QC review, if applicable.
DC:

DC Core services:

* NRT, RRT data production.
* Data submission & curation service for online ACTRIS in situ data.
* Secondary data QC before publication of data (syntactic and semantic correctness of metadata, syntactic correctness of data, parameter-dependent completeness of metadata, completeness and correctness of framework associations).

DC Added-value services:

* Contribution to services co-ordinated by other ACTRIS partners (source apportionment of submicron organic aerosol, VOC source attribution, cloud occurrence at cloud in situ NFs).
* PM retrieval @ GAW sites globally.
* Alert service for National Facilities on instrument malfunctions.
4.2.2.3 Offline In Situ Data Production
In the offline workflow for ACTRIS in situ data, data production is centred
around the sample medium, following its way through the workflow:

* Sample Medium Pre-Treatment: Pre-heating, impregnation, weighing in, pre-cleaning.
* Sample Medium Exposure: Transport to the field station, exposure in the sampling device, transport to the lab.
* Sample Preparation: Weighing out, sample medium extraction, sample medium apportioning.
* Sample Analysis.
Again, the content of workflow tasks and the responsibilities for them are
specified in Appendix 3, separately for each In Situ offline instrument type.
The following responsibilities specific to offline observations are
distributed between NF, TC, and DC as follows:

NF:

* Conduct sample medium pre-treatment, sample medium exposure, sample preparation, and sample analysis in accordance with the procedures defined by the TC.
* Document all sample handling steps in the documentation system specified and implemented by the TC, and operated by the DC.
* Evaluate samples according to TC guidelines, using the tools specified and implemented by the TC, and operated by the DC.
* Respond and act on quality control feedback from the DC.
* Participate in quality assurance measures defined and conducted by the TC, e.g. round-robin exercises.
TC:
* Specify guidelines and implement the documentation system for sample medium pre-treatment, sample medium exposure, sample preparation, and sample analysis.
* Specify guidelines and implement the algorithm for sample evaluation.
* Specify guidelines for data quality control.
* Specify procedures for, implement, and conduct quality assurance measures as appropriate for the observation method, document them, and store results in the QA measure database operated by DC.
DC:

DC Core services:
* Operate the documentation system for sample medium pre-treatment, sample medium exposure, sample preparation, and sample analysis, as specified and implemented by TCs.
* Operate the archive for documentation of sample handling steps.
* Operate the algorithm for sample evaluation.
* Implement and operate the quality control step for offline level 2 data.
4.2.3 ARES dataflow and data management
At present, the ACTRIS aerosol remote sensing component is highly inhomogeneous in terms of instrumentation: most of the lidar systems are home-made or highly customized. In such cases, the implementation of a standard, centralized, quality-assured scheme for the analysis of raw data is the most efficient solution for providing FAIR and quality-assured data at RI level.
The SCC (EARLINET Single Calculus Chain) is the solution adopted by the ACTRIS (Aerosol, Clouds and Trace gases Research InfraStructure Network) aerosol remote sensing data centre to ensure homogeneous, traceable and quality-controlled data. The main concepts at the base of the SCC are automatization and full traceability of quality-assured aerosol optical products.
The ARES DC also compiles aerosol optical and physical properties (profile and column) from combined lidar + photometer observations collected at NFs. The GARRLiC (Generalized Aerosol Retrieval from Radiometer and Lidar Combined data) retrieval will be used for this; it synergistically inverts coincident lidar and radiometer observations, starting from SCC products and AERONET-ACTRIS processing stream products. These processing streams are fully controlled by ACTRIS.
The data curation workflow is suitable for provision in NRT and RRT, following the same steps and procedures as the standard processing. NRT/RRT delivery of not fully quality-assured data is possible as long as an NF provides raw data to the DC in NRT/RRT.
Raw data collected at the NF in the original acquisition data format are transcribed into a homogeneous, agreed netCDF data format for the aerosol remote sensing processing suite at the ACTRIS DC; this constitutes the ARES Level 0 data. All information needed for the subsequent steps in the processing chain is annotated in the file or in a dedicated database (the SCC database). It is recommended that raw data be centrally stored, with the NF responsible for keeping a local backup.
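As an illustration of this transcription step, the following minimal Python sketch writes a raw lidar profile into a netCDF file with Level 0 style annotations. All variable names, dimensions, and attributes here are illustrative assumptions, not the actual SCC Level 0 specification.

```python
# Hypothetical sketch of transcribing raw acquisition data into an agreed
# netCDF Level 0 layout; names and attributes are illustrative only.
import numpy as np
from netCDF4 import Dataset

with Dataset("ares_level0_example.nc", "w", format="NETCDF4") as nc:
    # Global attributes annotating information needed later in the chain
    nc.station_id = "xx"                       # hypothetical NF identifier
    nc.system_configuration_id = "lidar-001"   # assumed key into the SCC database
    nc.processing_level = "0"

    nc.createDimension("time", None)           # unlimited: one record per averaging period
    nc.createDimension("range", 1000)

    signal = nc.createVariable("raw_signal", "f4", ("time", "range"))
    signal.units = "counts"
    signal[0, :] = np.random.poisson(50, size=1000)  # placeholder raw profile
```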
Level 0 data are centrally processed at the ACTRIS ARES DC, generating Level 1 (not fully QCed) preprocessed signals and optical property products. On-the-fly QC procedures guarantee basic quality control on Level 1 data.
Off-line QC procedures are run systematically once the outcomes from the related TC (namely CARS) are available and transferred to the ARES DC unit. The data originator and the CARS TC unit receive feedback on the outcome of the QC. This feedback mechanism makes it possible to discover and address instrumental issues, with links to the TC. All data compliant with all QC requirements (both pre-processed and processed data) are made available as Level 2 data.
The ARES DC also offers products resulting from processing, at the DC itself, of Level 2 lidar and photometer data collected at the aerosol remote sensing NFs. Finally, Level 3 climatological products are produced at the ARES DC from lidar Level 2 optical property products.
| Product Type | Availability (Typical) |
| --- | --- |
| Level 1 | RRT / NRT |
| Level 2 | 1 year |
| Level 3 | 1 year |
Table 15: ARES Data Products Availability
Figure 9: ARES Data Products Availability
4.2.4 CLU dataflow and data management
Modern cloud remote sensing instruments can produce vast amounts of raw data. These data first need to be stored locally at the measurement site. Then, the data are transferred to FMI servers for processing and archiving. Currently, FMI offers an FTP access point to establish the file transfer, but it is the site operator's responsibility to maintain a regular data flow to FMI.
It should be noted that, technically, it is also possible to execute the first processing step already on site, and only transfer the processed measurement files, which are much smaller, to FMI for further processing. It is currently unclear whether this option will be used in the operational ACTRIS processing or not.
At FMI, the raw measurement files from various instruments are processed into more standardized netCDF files with a common metadata structure. In this stage, we also screen out noisy data points and apply possible calibration factors. This first processing step is applied to cloud radar and lidar measurements; the microwave radiometer (MWR) data are processed elsewhere, and FMI only receives the calibrated and processed Level 2 MWR files needed in the further processing steps.
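A minimal sketch of this screening-and-calibration step is shown below. The threshold, calibration factor, and array shapes are assumptions for illustration, not the actual FMI processing code.

```python
# Illustrative noise screening and calibration of a raw measurement array;
# the threshold and calibration factor are assumed values, not FMI's.
import numpy as np

def screen_and_calibrate(raw, k=2.0, calibration_factor=1.05):
    """Mask points below a crude noise floor, then apply a calibration factor."""
    noise_floor = np.nanmedian(raw) + k * np.nanstd(raw)
    screened = np.where(raw > noise_floor, raw, np.nan)  # drop noisy bins
    return screened * calibration_factor

raw = np.random.gamma(2.0, 1.0, size=(600, 500))   # fake raw radar returns
cleaned = screen_and_calibrate(raw)
print(f"{np.isnan(cleaned).mean():.0%} of points screened out")
```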
After receiving and processing the raw data (and receiving the MWR files), we generate all Level 2 cloud products with our in-house processing suite. All processed data are stored in netCDF files, which are archived on FMI's data servers. From the processed files, we generate a metadata database which is synchronized with the master metadata repository hosted by the ACCESS unit. All of our metadata is available in JSON format via a RESTful HTTP API. The actual metadata standard is yet to be decided, but it must comply with the netCDF conventions because we use the netCDF file format. All data files undergo regular backups.
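The endpoint below is the one listed in Table 17 for the CLU unit; the shape of the response and any query parameters are assumptions for illustration, since the metadata standard is still to be decided.

```python
# Hypothetical query against the CLU metadata API listed in Table 17;
# the response structure shown here is an assumption, not a documented contract.
import requests

response = requests.get("http://devcloudnet.fmi.fi/api/", timeout=30)
response.raise_for_status()
for record in response.json():   # assumed: a JSON array of file/product records
    print(record)                # e.g. site, date, product type, download link
```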
A general overview of the links between national facilities, CLU, and the corresponding topical centre, CCRES, is illustrated in Figure 10.
Figure 10: CLU data products and services
4.2.5 GRES dataflow and data management
Data provided in the GRES unit are L2 and L3 data produced from L0 and L1 data processing performed at NF level (see Figure 11). These data have to be provided by NFs in the GEOMS HDF format and are then converted within the GRES DC into NetCDF format in order to be disseminated through the ACTRIS DC. These data have to be completed with rich metadata. NFs are also in charge of providing tools to facilitate the generation and handling of the data. The GRES unit is in charge of:
- creating and maintaining the metadata catalogue,
- providing free and open access to the metadata, data and tools developed by NFs through user-friendly web interfaces,
- providing open access to documents describing the retrieval algorithms by type of data and the data quality assurance and control procedures used in NFs for data production,
- developing tools to convert L2 and L3 data into NetCDF format and to ensure the completeness of the data provision process,
- developing quicklooks for level 2 and 3 data to make the data easier to understand,
- offering links to the EVDC (ESA Atmospheric Validation Data Centre),
- assuring long-term archiving of L2 and L3 data.

Jointly with NFs and TCs, it also contributes to the elaboration of the data workflow.
Figure 11: GRES data products and services
4.2.6 ASC dataflow and data management
Data provided in the ASC unit are L2 and L3 data produced from L0 and L1 data processing performed at NF level (see Figure 12). These data have to be provided by NFs in standard formats and completed with rich metadata (see section 3.5). NFs are also in charge of providing tools to facilitate the generation and handling of the data. The ASC unit is in charge of i) providing free and open access to data and tools developed by NFs through user-friendly web interfaces, ii) developing data visualization tools, iii) developing tools to ensure the quality and completeness of the data provision process, iv) creating and maintaining the metadata catalogue, and finally v) assuring long-term archiving of L2 and L3 data. Jointly with NFs and TCs, it also contributes to the elaboration of the data workflow.
Figure 12: current overview of ASC unit
## Findable: Making data findable, including provisions for metadata [FAIR
data]
4.3.1 ACTRIS variable names and implementation of vocabulary
Generally, ACTRIS data set names aim to be compliant with the CF (Climate and Forecast) metadata conventions. Where no standard CF names are defined, an application will be sent to establish these. The names used are listed in Appendix 1. Currently there is no search model used by the ACCESS unit (ACTRIS Data Centre web interface). Still, search keywords are implemented to varying degrees at the individual data centre unit level (e.g. search keywords are used for the EBAS ISO19115 records). The ACTRIS Data Centre will in the future use a controlled set of vocabularies for search keywords, such as the Global Change Master Directory (GCMD) or similar, and semantic search will be implemented to facilitate use of ACTRIS variables across scientific disciplines and domains.
The ASC unit has developed a user-friendly web interface which includes search tools based on the metadata catalogue for the three pillars, DASCS, LAR and LADP. Relevant search criteria have been defined for each pillar.
Standard vocabularies might not always be used, but in all cases the DC ACCESS unit should map the names to existing standard vocabularies.
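Where CF compliance is possible, it is largely a matter of attribute conventions. The following minimal Python sketch shows a CF-1.7 style annotation of an ACTRIS-like variable; the file layout is illustrative, and the choice of this particular standard name for the example is an assumption.

```python
# Illustrative CF-1.7 style annotation of an ACTRIS-like variable;
# the variable layout and file name are hypothetical.
from netCDF4 import Dataset

with Dataset("cf_example.nc", "w") as nc:
    nc.Conventions = "CF-1.7"
    nc.createDimension("time", None)
    aod = nc.createVariable("aerosol_optical_depth", "f4", ("time",))
    aod.standard_name = "atmosphere_optical_thickness_due_to_ambient_aerosol_particles"
    aod.long_name = "aerosol optical depth at 500 nm"
    aod.units = "1"    # CF convention for dimensionless quantities
```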
| Data centre unit | Vocabulary name | Comment |
| --- | --- | --- |
| In Situ | IUPAC, CF-1.7 | |
| ARES | CF-1.7 | |
| CLU | CF-1.7 | |
| ACCESS | Defined by primary repository | |
| ASC | CF-1.7 | |
| GRES | CF-1.7 | |
Table 16: List of vocabularies
4.3.2 Metadata standards and metadata services
ACTRIS will harvest metadata from a large range of observations employing methodologies provided by multiple data centre units, covering different types of data in terms of size, time coverage and metadata. The ACCESS unit aims at providing discovery metadata in a common format for all ACTRIS level 2 data, using a common WIS-compliant standard such as ISO 19139 or ISO 19115. A decision about the standard has not yet been taken and is under consideration. In any case, exceptions may occur where the selected metadata standards do not meet the need to describe the data. The present situation is shown in Table 17.
Future efforts will further develop the system shown in Figure 7 and make it possible for the ACCESS unit to harvest all metadata from the different data centre units, collect it in a central ACTRIS metadata catalog, and provide it through a commonly used protocol for metadata harvesting such as OAI-PMH or similar. A decision about the protocol has not yet been taken and is under consideration. ACTRIS data should be described with rich metadata. Currently, metadata services are offered at data centre unit level, but the aim is to offer all ACTRIS level 2 data through a centralized metadata service.
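A minimal harvesting sketch against the EBAS OAI-PMH endpoint listed in Table 17 is shown below; only identifier listing is demonstrated, and pagination via resumption tokens is omitted.

```python
# Minimal OAI-PMH identifier harvest from the EBAS endpoint (see Table 17);
# resumption-token paging and error handling are omitted for brevity.
import requests
import xml.etree.ElementTree as ET

url = "https://ebas-oaipmh.nilu.no/oai/provider"
params = {"verb": "ListIdentifiers", "metadataPrefix": "iso19115"}
root = ET.fromstring(requests.get(url, params=params, timeout=60).content)

ns = {"oai": "http://www.openarchives.org/OAI/2.0/"}
for header in root.findall(".//oai:header", ns):
    print(header.findtext("oai:identifier", namespaces=ns))
```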
There might be instances where the standards do not cover the need for describing the data at a data centre unit. In this case, the ACTRIS Data Centre will still try to provide metadata in a way that is similar to the agreed formats and standards, and at the same time push for an extension of the specified standard.
ACTRIS aims at following the INSPIRE directive for metadata formatting. Present metadata standards are handled at the ACCESS unit level. A decision is needed on whether data centre units should provide metadata according to specific standards, as well as on providing metadata from the ACTRIS DC to the ENVRI cluster, EOSC, etc.
The tables below show the status as of July 2019.
| Data centre unit | Metadata service | End-point | Standard |
| --- | --- | --- | --- |
| In Situ | OAI-PMH | https://ebas-oaipmh.nilu.no/oai/provider?verb=ListIdentifiers&metadataPrefix=iso19115 | ISO 19115-2, CF-1.7, ACDD |
| ARES | ISO via THREDDS server, JSON via REST API, HTTP via Apache Server | https://login.earlinet.org:8443/thredds/catalog/earlinedbscan/catalog.html , https://data.earlinet.org/api/services/ , https://data.earlinet.org/ | ISO 19115-2, ECMA-262-3, CF-1.7, NCML, RFC 2616 |
| CLU | JSON via REST API | http://devcloudnet.fmi.fi/api/ | To be decided |
| ACCESS | To be decided | None | To be decided |
| ASC | CSW, GeoNetwork | http://catalogue2.sedoo.fr/geonetwork/srv (implementation ongoing) | ISO 19139 |
| GRES | CSW, GeoNetwork | http://catalogue2.sedoo.fr/geonetwork/srv (implementation ongoing) | ISO 19139 |
Table 17: List of metadata standards and services implemented by July 2019
ACTRIS metadata should be registered or indexed in relevant metadata catalogues.

| Metadata catalog | Description | ACTRIS DC unit indexed |
| --- | --- | --- |
| GISC Offenbach | | |
| NextGEOSS | | |
| WIGOS | | None |
| Copernicus | Defined by primary repository | None |
| re3data | To be defined | None |
| EOSC | To be decided | None |
Table 18: ACTRIS metadata registered or indexed in relevant metadata catalogs.
4.3.3 Traceability of ACTRIS data
The term measurement traceability refers to an unbroken chain of comparisons relating an instrument's measurements to a known standard, time, processing, software, etc. Calibration to a traceable standard can be used to determine an instrument's bias, precision, and accuracy. The ability to trace a measurement back to its origin is important for several reasons: it increases quality by making it possible to back out or reprocess bad data, and conversely, it allows good data sources and processing techniques to be rewarded and promoted. It also ensures that proper attribution is given to data originators, adequately reflecting their contributions through the data production chain.
ACTRIS works towards establishing traceability for all variables using persistent identifiers (PIDs). This work is in development and needs close interaction with the topical centres as well as the National Facilities. Currently, ACTRIS is using digital object identifiers (DOIs) for some level 3 datasets through the DataCite Metadata Store API, and more will be implemented.
Currently, the ARES unit assigns two different types of local persistent identifier (PID):
* Data Processing PIDs. These PIDs identify unequivocally the characteristics of the instrument (including all its subparts) used to collect the Level 0 data. In particular, each submitted Level 0 product is assigned an alphanumeric ID which allows retrieval of all the details about the instrument configuration used to perform the measurement, as well as the data processing configuration used to compute the corresponding Level 1 data products.
* Dataset PIDs. An internal PID generation system based on an alphanumerical "prefix"/"suffix" pattern univocally identifies each dataset downloaded through the ARES interfaces (a minimal sketch of this pattern follows below).
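The sketch below illustrates the prefix/suffix pattern in the Handle System style; the prefix value and suffix scheme are hypothetical, not the actual ARES implementation.

```python
# Hypothetical "prefix/suffix" PID minting in the Handle System style;
# the prefix and suffix scheme are illustrative assumptions.
import uuid

PREFIX = "21.12345"   # hypothetical Handle prefix

def mint_dataset_pid() -> str:
    """Return a unique prefix/suffix identifier for a downloaded dataset."""
    return f"{PREFIX}/{uuid.uuid4().hex[:12]}"

print(mint_dataset_pid())   # e.g. 21.12345/3f9c2a1b7d4e
```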
ACTRIS data will be assigned PIDs that are available through the metadata; the table below shows the status as of July 2019.
| Data centre unit | PID service | Description | Standard |
| --- | --- | --- | --- |
| In Situ | To be decided | | |
| ARES | Internal | Internal generation system of alphanumerical PIDs for data processing; internal generation system of alphanumerical PIDs based on the Handle System pattern for datasets | RFC 3650, RFC 3651, RFC 3652 |
| CLU | To be decided | None | To be decided |
| ACCESS | Defined by primary repository | None | To be decided |
| ASC | To be decided | None | To be decided |
| GRES | To be decided | None | To be decided |
Table 19: ACTRIS PID handlers
4.3.4 Version control of ACTRIS (meta)data
The ACTRIS DC aims at providing clear versioning of its data and metadata. Due to the decentralized nature of the Data Centre, this varies between the different data centre units, and implementation will be done at unit level.

As a guiding principle, all data submitted to ACTRIS that pass quality assurance should be uniquely identified. In case of updates, an ID number is generated, and previous data versions should be identifiable and kept available upon request while the latest version is served through the ACTRIS Data Centre.
A versioning system has been implemented at ARES directly in the RDBMS by using DML (Data Manipulation Language) triggers. A new version of a file is produced when a user modifies data through a DML event. New versions will be centrally produced if new QC procedures and new processing features are released. Additionally, new versions of files will be allowed and centrally handled for fixing file bugs, in particular for legacy data.
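The following self-contained sketch reproduces the idea of a DML trigger that archives the previous row version on every UPDATE. It uses SQLite so it runs anywhere; the actual ARES implementation lives in its own RDBMS with different table and trigger definitions.

```python
# Illustrative DML-trigger versioning: every UPDATE archives the old row.
# SQLite is used here only to make the sketch self-contained.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE product (id INTEGER PRIMARY KEY, payload TEXT, version INTEGER DEFAULT 1);
CREATE TABLE product_history (id INTEGER, payload TEXT, version INTEGER);

-- DML trigger: archive the previous version, then bump the version number.
CREATE TRIGGER version_on_update AFTER UPDATE ON product
BEGIN
    INSERT INTO product_history VALUES (OLD.id, OLD.payload, OLD.version);
    UPDATE product SET version = OLD.version + 1 WHERE id = NEW.id;
END;
""")
db.execute("INSERT INTO product (payload) VALUES ('v1 data')")
db.execute("UPDATE product SET payload = 'v2 data' WHERE id = 1")
print(db.execute("SELECT * FROM product_history").fetchall())  # [(1, 'v1 data', 1)]
print(db.execute("SELECT * FROM product").fetchall())          # [(1, 'v2 data', 2)]
```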
## Accessible: Making data openly accessible [FAIR data]
The purpose of the data collection and generation of data products in ACTRIS
is to provide open access to aerosol, cloud and trace gas in situ and remote
sensing measurements of high quality (see section 1).
A guiding principle is that all ACTRIS data should be readable for both humans and machines using protocols that offer no limitations to access. The ACTRIS Data Centre is organized as a distributed network of centralized repositories (see Figure 6). The ACTRIS data will be offered through the ACTRIS Data Centre portal, a web portal that allows the user to search, analyse and download data produced within ACTRIS (see Figure 7). Access to data and metadata will also be made possible by machine-to-machine interaction, enabling harvesting of metadata from the ACTRIS metadata catalog. Currently, machine-to-machine access to ACTRIS data varies between the different data units and their associated repositories.
There might also be data available through the ACTRIS Data Centre that are not directly ACTRIS data, but are used in the production and interpretation of ACTRIS data.
4.4.1 ACTRIS data access and access protocols
General guidelines for access to ACTRIS data and services are available in the current ACTRIS access and service policy. Conditions of use are indicated in section 3.4 and are covered by the attached license, unless stated otherwise.
The access protocol will be clearly described in the metadata. If direct access is limited due to the size of the data or data sensitivity, contact information at institutional and/or personal level will be included. The data format and access protocol must be available as machine-readable metadata.
Currently, all data centre units maintain access to the data, either directly through the unit-specific repository or through the ACTRIS data portal.
The table below shows the data formats and access protocols.

| DC unit | Data format | Repository URL | Protocol | Authentication and authorisation needed |
| --- | --- | --- | --- | --- |
| In Situ | netCDF, NasaAmes, CSV, XML | http://ebas.nilu.no/ | HTTP | No |
| ARES | netCDF | http://data.earlinet.org/ | HTTP | Yes |
| CLU | netCDF | http://cloudnet.fmi.fi | HTTP | No |
| ACCESS (data portal) | Defined by primary repository | http://actris.nilu.no/ | HTTP | For some data |
| ASC | netCDF (data conversion by 2020) | https://data.eurochamp.org/ | HTTP | For some data |
| GRES | netCDF (data conversion by 2021) | https://gres.aeris-data.fr | FTP | No |
| ACCESS | varies | http://actris.nilu.no/Content/?pageid=226809f7a0ac49538914eeafb4448afa | FTP | No |
Table 20: Data formats and access protocols
For the In-Situ, CLU, GRES and ASC units, all data, metadata, tools and documentation are provided with free and fully open access to all users, without authentication with username and password.
A sign-on authentication system has been implemented at the ARES unit. It is based on the CAS (Central Authentication Service) project, which natively implements multiple authentication protocols (CAS, SAML, OAuth, OpenID) and provides authentication both via username and password and via Google credentials. In order to gain access to ARES products (apart from quicklooks, simple plots of Level 1 data), user authentication (free and open to all users) is needed. This authentication process has been implemented for the sole purpose of allowing feedback to the end user in case of software or data product updates.
In general, for all data that require username and password, a Single Sign-On service will be implemented and used by all Data Centre units.
In all cases where access is restricted, information on how to access the data should be available through the metadata, in order to facilitate machine-to-machine interaction.
If specific software tools are needed to access the data, documentation about the software and how to obtain it should be included, preferably in the metadata. Furthermore, ACTRIS digital tools (software etc.) will be available through open access repositories like GitHub. An open-source licence for software is encouraged and should be applied when possible. All software related to ACTRIS data should aim at following open access practice where possible. For software related to access to level 2 data, the ACCESS unit is responsible together with the data centre units. To be discussed: for levels 0 and 1, the topical centres and/or data centre units are responsible for providing access to software related to ACTRIS level 0 and level 1 data.
There are valuable networks contributing to ACTRIS, e.g. EMEP, GAW and EARLINET, and level 3 products that bridge to external databases and use these data in combined products. The implementation, and the strategic and technical contributions involved, are under development.
## Interoperable: Making data interoperable [FAIR data]
As a guiding principle, ACTRIS should make sure that metadata and data use a formal, accessible, shared and broadly applicable language for knowledge representation in order to facilitate interoperability. Still, work remains to see if a common solution can be agreed upon. The intricate nature of the data and metadata might require different solutions to suit the needs of different data centre units. As mentioned in section 4.3, metadata standards and vocabularies commonly used in the atmospheric domain should be applied, unless the common solutions do not address the specific needs of the DC unit. Implementation of new standards for data and metadata used in the context of ACTRIS should be discussed by all the DC units. The aim should be to harmonize data and metadata as much as possible, both in terms of the technical aspects of implementation and in making it easier for the end user to make use of the data.
Many of the DC units use the THREDDS Data Server (TDS) for serving data and metadata in an automated way as netCDF files through the OPeNDAP protocol (this approach is implemented by In-Situ, ARES, ASC and GRES).
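A minimal sketch of OPeNDAP access through such a TDS is shown below; the dataset URL and variable name are hypothetical placeholders.

```python
# Hypothetical OPeNDAP access via a THREDDS Data Server; the URL and variable
# name are placeholders, and netCDF4 must be built with DAP support.
from netCDF4 import Dataset

url = "https://tds.example.org/thredds/dodsC/actris/level2/sample.nc"  # placeholder
with Dataset(url) as nc:
    print(list(nc.variables))                  # discover available variables
    subset = nc.variables["backscatter"][:10]  # only the requested slice is transferred
```

The design benefit is that clients subset remotely: only the requested slice travels over the network rather than the whole file.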
In addition to this, ARES provides a REST API for machine-to-machine interaction. The API serves metadata (info, provenance, versions, quality controls, etc.) in JSON format and data (specific files or previously generated datasets) in NetCDF format.
CLU is currently developing a RESTful API with services similar to those of ARES.
## Reusable: Increase data re-use [FAIR data]
The guiding principle is free and open access to ACTRIS data and ACTRIS data
products, and the ACTRIS DC will facilitate data re-use by providing free and
open access to ACTRIS data following the ACTRIS access and service policy and
the open research data initiative of the European Commission.
As a result, the ACTRIS DC will implement one or multiple licenses for all ACTRIS level 2 data and NRT data that are available through the ACTRIS metadata catalog. Furthermore, the ACTRIS DC might also consider issuing a license on the use of metadata, in order to acknowledge ACTRIS when large amounts of metadata are harvested by third-party applications/services. ACTRIS aims to implement a license from the time ACTRIS becomes an ERIC (probably end of 2020 or early 2021). Until ACTRIS has decided upon and implemented one or more licenses, the current ACTRIS data policy will apply.
Several features have been implemented by the In-Situ, ARES and CLU units (among others) to ensure reusability and traceability, in particular traceable data flows and version control of data products; see section 4.3.
In order to increase the reusability of data in the ASC unit, these data are completed with rich metadata which are openly accessible from the website. These metadata provide a detailed technical description of the chambers (size, volume, walls, irradiation system, ...), the experimental protocols used for the generation of the data, and an "auxiliary mechanism" which provides the chamber-dependent parameters affecting the observations. The latter is very useful for modelers who aim at simulating experiments performed in simulation chambers.
As regards the ARES unit, all the characteristics of the lidar instrument configuration (laser, telescope, acquisition and detection system, processing configuration, ...) are reported as metadata in each ARES data product.
Availability of data can vary between the different data centre units. As an example, in situ data are typically submitted on an annual basis and are therefore available the subsequent year, but other data centre units may provide NRT delivery of data; in addition, there may be campaign-based data. ACTRIS legacy data should be kept available for users, but may have a different data policy from the current ACTRIS data policy. If this is the case, this information should be available in the metadata.
| Data centre unit | Data licence | Comment |
| --- | --- | --- |
| In Situ | To be decided | |
| ARES | To be decided | |
| CLU | To be decided | |
| ACCESS | Will be defined by primary repository | |
| ASC | To be decided | |
| GRES | To be decided | |
Table 21: Data licences
| Responsible data centre unit | Software licence | Software link |
| --- | --- | --- |
| In Situ | None | EBAS IO |
| ARES | None | Single Calculus Chain |
| CLU | MIT | CloudnetPy |
| ASC | None | None |
| GRES | None | None |
Table 22: Software licences
# Allocation of resources
The ACTRIS Data Centre is a distributed data centre with scientific and data expert contributions, as well as funding contributions, from many institutions and sources. All host countries contribute significantly to the operation and implementation through both national and international projects, in addition to considerable support from the institutions involved. Furthermore, there is a large ongoing activity of making ACTRIS data FAIR; in particular, this is the core of the work within the H2020 project ENVRI-FAIR. The ACTRIS DC budget in ENVRI-FAIR is ca. 890 kEuro, which makes this project one of the main funders of making ACTRIS data FAIR.
Details on the costs of the various units are available upon request, and are part of the work within ACTRIS-PPP and ACTRIS-IMP (starting 1 January 2020).
# Data security
The main structure and installations of the ACTRIS Data Centre are located at NILU - Norwegian Institute for Air Research, Kjeller, Norway. NILU hosts EBAS, archiving all in situ data sets, in addition to the ACTRIS Data Portal. The other installations are the EARLINET DB at the National Research Council - Institute of Environmental Analysis (CNR), Tito Scalo, Potenza, Italy; the satellite data components at the University of Lille, Villeneuve d'Ascq, France; and the cloud profile data in the Cloudnet DB at the Finnish Meteorological Institute in Helsinki, Finland.
## Archiving and preservation of In-Situ data
EBAS is a relational database (Sybase) developed in the mid-1990s. Data from primary projects and programmes, such as ACTRIS, GAW-WDCA, EMEP and AMAP, are physically stored in EBAS. In addition, all data in EBAS are stored on a dedicated disk in the file tree at NILU. This includes data levels 0, 1 and 2.
The complete data system is backed up regularly. This includes incremental backups of the database 4 times per week, and two weekly backups of the full database to a server in a neighbouring building, to ensure as complete as possible storage of all data for future use in case of e.g. fire or other damage to the physical construction. File submission is conducted via a web application which checks files for syntactic and semantic validity before uploading. As an alternative submission method, especially for regular submission or submission of many files at once, ftp upload is possible.
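The sketch below shows the flavour of such a pre-upload syntactic check for a NASA Ames style submission; the rules shown are illustrative toy versions, much simpler than the actual EBAS validation.

```python
# Illustrative pre-upload syntactic check for a NASA Ames style file;
# these rules are toy examples, not the actual EBAS validation logic.
from pathlib import Path

def check_submission(path: Path) -> list:
    problems = []
    if path.stat().st_size == 0:
        problems.append("file is empty")
    try:
        first = path.read_text(errors="strict").splitlines()[0].split()
        int(first[0])   # NASA Ames: first token is the number of header lines
    except (UnicodeDecodeError, ValueError, IndexError):
        problems.append("first line does not look like a NASA Ames header")
    return problems

print(check_submission(Path("submission.nas")))   # [] means the file passed
```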
A dedicated ftp area is allocated to incoming files; all activities herein are logged in a separate log file and backed up every 2 hours.
Ca. 385 separate new comprehensive files, including metadata, with annual time series of medium to high time resolution (seconds to a week) are expected per year. A significant growth in this number is not expected on an annual scale. For more detail, see Table 2 and Table 3.
EBAS builds on more than 40 years of data management. Over the last 10 years there has been European project-type cooperation, from FP5 to Horizon 2020, with the EMEP and GAW programmes, in place since the 1970s, as the foundation. Sharing visions and goals with the supporting long-term, policy-driven frameworks has ensured long-term funding for the core database infrastructure. A long-term strategy for providing access to all ACTRIS data and other related services is in progress through the establishment of ACTRIS as an RI. ACTRIS is on the ESFRI (European Strategy Forum on Research Infrastructures) roadmap for Research Infrastructures, and a preparatory phase project is ongoing.
## Archiving and preservation of ARES data
The ARES infrastructure is composed of seven virtual servers and two different SANs (Storage Area Networks).
One server hosts a PostgreSQL database; a second and a third are used to interface with the data originators and end users, respectively. ARES data products are safely stored on a primary SAN. A full daily backup is made automatically and stored on a second, backup SAN.
Another server is responsible for the provisioning of the whole database through THREDDS (Thematic Real-time Environmental Distributed Data Services). On the same server, a CAS (Central Authentication Service) is configured to authenticate all ARES users centrally.
The current size of the PostgreSQL EARLINET database is about 1 GB. The total amount of data submitted (NetCDF EARLINET files) is about 1.3 GB. The estimated growth rate of the database is 100-200 MB/year. However, a significant growth in the number of files to be collected is expected because of: the use of the SCC (Single Calculus Chain) for data submission, the inclusion of new products (preprocessed data, NRT optical properties, profiles, aerosol layer properties and multiwavelength datasets), increases in the number of aerosol remote sensing NFs, and an increase in NFs operating 24/7. We estimate that during the Implementation Phase the ACTRIS aerosol profile database could grow at a rate of about 300 GB per year.
The SCC is part of the ARES infrastructure and is the standard EARLINET tool for the automatic analysis of lidar data. Three additional servers are needed to provide this further service: a calculus server where all the SCC calculus modules are installed and run, a MySQL database where all the analysis metadata are stored in a fully traceable way, and finally a web interface allowing users to access the SCC.
The ARES infrastructure is maintained by the National Research Council of Italy with a long-term commitment to archiving and preservation. The archiving in the CERA database is a further measure for assuring the availability of the data through redundancy of the archive.
## Archiving and preservation of CLU data
The CLU database consists of a file archive connected to a relational metadata database, a design driven by the nature of the typical use-case and the data volume. The infrastructure comprises a webserver, an FTP server for incoming data streams, web and rsync servers for outgoing data streams, and processing servers, with data storage distributed across a series of virtual filesystems including incremental backups. Due to the data volume, most sites also hold an archive of their own Level 0 and Level 1 data, effectively acting as a second, distributed database and an additional backup.
The current size of the database is about 25 TB, and the volume is expected to grow by close to 0.5 TB per year with the current set of stations and the standard products. However, there will be a significant increase in volume when the planned move to multi-peak and spectral products is undertaken, in addition to a slight increase arising from the creation of new products. The CLU infrastructure is maintained by FMI with a long-term commitment to archiving and preservation. Publication of QA datasets will aid dataset preservation.
## Archiving and preservation of GRES data
For the GRES unit, data are stored on disk on a server in Paris. As new data are provided once a year, a full backup is made yearly and stored on tape. We plan to soon have a second copy on tape in Palaiseau, France. The distance between the two sites is about 20 km.
The GRES infrastructure is maintained by AERIS with a long-term commitment to archiving and preservation.
## Archiving and preservation of ASC data
Since the Eurochamp H2020 project, data from simulation chambers have been managed by AERIS. The archive consists of a file archive connected to a MongoDB metadata database. Data files are stored on disk on a server located in Toulouse, France. A full daily backup is made automatically and stored at another site (in Tarbes, France). The distance between the database and the backup site is about 120 km. We plan to soon have a copy on tape in Paris.
The ASC infrastructure is maintained by AERIS with a long-term commitment to archiving and preservation.
## Archiving and preservation of ACCESS data
The ACCESS unit provides access to ACTRIS data through the ACTRIS data portal using the ASP.NET (Web Forms) Framework 4.5 and the Internet Information Services (IIS) web server. The metadata are harvested from each individual data centre unit, currently In Situ (EBAS), ARES (EARLINET), CLU (CLOUDNET) and GRES (NDACC), using custom harvesting routines triggered by cronjobs on an Ubuntu server running custom scripts written in Perl/Python. The metadata themselves are stored on an Oracle database server, version 11.2.0.4. Versioning and revision control is managed using Subversion (SVN).
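A toy Python version of such a harvesting routine is sketched below; the per-unit endpoints are placeholders, and the real ACCESS scripts differ in detail.

```python
# Toy harvesting routine in the spirit of the ACCESS cronjob scripts;
# endpoints are placeholders and responses are assumed to be JSON lists.
import requests

UNIT_ENDPOINTS = {
    "In Situ": "https://ebas.example.org/metadata",    # placeholder
    "ARES": "https://earlinet.example.org/metadata",   # placeholder
}

def harvest_all(catalog: dict) -> None:
    """Pull metadata records from every unit into the central catalog."""
    for unit, endpoint in UNIT_ENDPOINTS.items():
        records = requests.get(endpoint, timeout=60).json()
        catalog.setdefault(unit, []).extend(records)

catalog = {}
harvest_all(catalog)   # in production, triggered by a cronjob
```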
# Ethical aspects
The ACTRIS Ethical Guidelines describe the main principles of ethics to be applied within ACTRIS activities. These guidelines shall be acknowledged and followed by all persons affiliated to ACTRIS and should be supported by all participating institutions, including the Data Centre. These guidelines do not exclude other ethical issues (e.g. related to professional and scientific responsibility, governance, social and environmental responsibility and law abiding) brought up by the ACTRIS ERIC and its contractual ACTRIS partners, or by the Ethical Advisory Board of the ACTRIS ERIC. In general, everyone in ACTRIS should work in a socially ethical way, keeping integrity and fairness, and maintaining a high level of trust and respect among the people working in ACTRIS and with the users and other stakeholders. One should always take into account that the mission of ACTRIS is to provide effective access for a wide user community to its resources and services, in order to facilitate high-quality Earth system research, to increase excellence in Earth system research, and to provide information and knowledge for developing sustainable solutions to societal challenges.
# Appendix
# Appendix 1: List of ACTRIS variables from observational platforms and
associated recommended methodology
List of ACTRIS variables and recommended methodology
Additional information: During ACTRIS-2, e.g. the aerosol and cloud databases will be augmented with new classification products developed through the combination of existing sensors with additional instrumentation, and with products providing information about aerosol layering and typing, together with advanced products derived from long-term series or special case analyses. In addition, new parameters utilizing these products will also be prepared, and standardized pre-processed lidar data and NRT optical property profiles will be available.
# Appendix 2: List of ACTRIS level 3 data products
List of ACTRIS level 3 data products
# Appendix 3: ACTRIS In situ data centre unit (In-Situ) data life cycle
A3.1 Data Life Cycle Description
More tables regarding the workflow are to be added; currently this is an example draft.
ACTRIS In situ data centre unit workflow diagram
Figure 13: ACTRIS In Situ DC unit data workflow, describing the interaction
between NFs, TCs, and DC In Situ in data production.
ACTRIS In situ data review workflow
Figure 14: ACTRIS In Situ DC unit data review workflow, a sub-workflow to the
In Situ main data production workflow.
A3.2 Workflow Implementation Tables, by instrument type
In this version of the DMP, this Annex focuses on the distribution of responsibilities for workflow processing tasks, and a short specification of these. A specification of metadata and data items contained in data products and pre-products will follow in a later version.
For each workflow task, responsibilities include the following roles:
* Specification: defining what is done in the task. Includes a step-by-step description, with formulas (algorithm description document (ADD), also called SOP, to be provided later).
* Implementation: taking the ADD and turning it into software.
* Operation: running the software on a daily basis. Includes documentation of provenance while executing the software.
* Application: applying the software. Usually automatic; needs to be specified for manual tasks involving humans.
The task specifications and the distribution of roles between NFs, TCs, and DC
are stated in tables linked below.
A3.2.1 Aerosol observations
A3.2.1.1 Integrating nephelometer
Nephelometer workflow implementation tables
A3.2.1.2 Filter Absorption Photometer
Filter Absorption Photometer workflow implementation tables
A3.2.1.3 Mobility Particle Size Spectrometer
Mobility Particle Size Spectrometer workflow implementation tables
A3.2.1.4 Condensation Particle Counter
Condensation Particle Counter workflow implementation tables
A3.2.1.5 Cloud Condensation Nucleus Counter
Cloud Condensation Nucleus Counter workflow implementation tables
A3.2.1.6 Aerodynamic / Optical Particle Size Spectrometer
Aerodynamic / Optical Particle Size Spectrometer workflow implementation
tables
A3.2.1.7 Aerosol Chemical Speciation Monitor
Aerosol Chemical Speciation Monitor workflow implementation tables
A3.2.1.8 Proton-induced X-ray Emission
Proton-induced X-ray Emission workflow implementation tables
A3.2.1.9 Organic Tracers
Organic Tracers workflow implementation tables
A3.2.1.10 Organic Carbon / Elemental Carbon
Organic Carbon / Elemental Carbon workflow implementation tables
A3.2.1.11 Scanning Particle Size Magnifier / (Neutral) Air Ion Spectrometer /
Nano Mobility Particle Size Spectrometer
Scanning Particle Size Magnifier / (Neutral) Air Ion Spectrometer / Nano
Mobility Particle Size
Spectrometer workflow implementation tables
A3.2.1.12 Particle Size Magnifier
Particle Size Magnifier workflow implementation tables
A3.2.2 Cloud observations
A3.2.2.1 Integrating Cloud Probe
Integrating Cloud Probe workflow implementation tables
A3.2.2.2 Ice Nucleus Counter
Ice Nucleus Counter workflow implementation tables
A3.2.2.3 Cloud Imaging Probe
Cloud Imaging Probe workflow implementation tables
A3.2.2.4 Cloud Droplet Probe
Cloud Droplet Probe workflow implementation tables
A3.2.2.5 Cloud Water Collector
Cloud Water Collector workflow implementation tables
A3.2.2.6 Cloud Aerosol Particle Sampler
Cloud Aerosol Particle Sampler workflow implementation tables
A3.2.3 Trace Gas Observations
A3.2.3.1 Volatile Organic Compounds
Volatile Organic Compounds workflow implementation tables
A3.2.3.2 Nitrogen Oxides
Nitrogen Oxides workflow implementation tables
A3.2.3.3 Condensable Vapours
Condensable Vapours workflow implementation tables
A3.2.3.4 Ozone
Ozone workflow implementation tables to be added.
A3.2.3.5 Meteorological Base Parameters
Meteorological Base Parameters workflow implementation tables to be added.
# Appendix 4: ACTRIS Aerosol remote sensing data centre unit (ARES) data life
cycle and workflow diagram
Link to separate document describing the workflow in more detail.
To be added by ARES
# Appendix 5: ACTRIS Cloud remote sensing data centre unit (CLU) data life
cycle and workflow diagram
ACTRIS Cloud remote sensing data centre unit workflow diagram
# Appendix 6: ACTRIS trace gases remote sensing data centre unit (GRES) data
life cycle and workflow diagram
ftir data
ACTRIS trace gases remote sensing data centre unit workflow diagram (ftir
data)
lidar data
ACTRIS trace gases remote sensing data centre unit workflow diagram (lidar
data)
uvvis data
ACTRIS trace gases remote sensing data centre unit workflow diagram (uvvis
data)
# Appendix 7: ACTRIS Atmospheric simulation chamber data centre unit (ASC)
data life cycle and workflow diagram
ACTRIS Atmospheric simulation chamber data centre unit workflow diagram
# Appendix 8: Data lifecycle and workflow for ACCESS Data Centre Unit
ACTRIS ACCESS data centre unit workflow diagram
# Appendix 9: Format and external data sources for level 3 variables
Below is a list of all level 3 variables listed in Annex II; the checkbox indicates whether they are included in the lists below:
* [ ] Column Water Vapor Content
* [ ] Climatology products for ACTRIS variables @ ACTRIS National Facilities across Europe
* [x] Collocation service of data from contributing networks
* [ ] PM retrieval @GAW sites
* [x] Single Scattering Albedo @ACTRIS National Facilities
* [ ] Integrated full-range particle number size distribution
* [ ] Source apportionment of submicron organic aerosols in Europe
* [ ] Volatile Organic Compounds (VOC) source attribution across Europe
* [ ] Cloud occurrence at cloud in situ observational platforms
* [x] Direct Sun/Moon Extinction Aerosol Optical Depth (column)
* [x] Spectral Downward Sky Radiances
* [x] Aerosol columnar properties (GRASP-AOD)
* [x] ReOBS
* [x] Satellite data – combined with ground based ACTRIS data
* [x] Aerosol and Gas trend assessment
* [x] Data Interpretation and Outlier Identification Tool
* [x] Optimal interpolation and Gap filling tool
* [x] Model Evaluation Service
* [x] NWP Model Evaluation Service
* [x] Transport modelling products for assessment of source regions
* [x] Alert Service for National Facilities
Collected (other than ACTRIS L0-1-2)

| Product | Format | Source | Description |
| --- | --- | --- | --- |
| AERONET-NASA L1 | csv | NASA/GSFC | https://aeronet.gsfc.nasa.gov |
| Terra+Aqua/MODIS | HDF4 | AERIS | https://modis.gsfc.nasa.gov |
| CALIPSO | HDF4 | AERIS | https://www-calipso.larc.nasa.gov |
| CLOUDSAT | HDF4 | AERIS | http://www.cloudsat.cira.colostate.edu |
| PARASOL | HDF5 | AERIS | http://www.icare.univ-lille1.fr/parasol |
| Aura/OMI | HDF4 | AERIS | https://aura.gsfc.nasa.gov/omi |
| Terra/MISR | HDF4 | AERIS | https://terra.nasa.gov/about/terra-instruments/misr |
| Metop/IASI | BUFR | AERIS | https://www.eumetsat.int/website/home/Satellites/CurrentSatellites/Metop/MetopDesign/IASI/index.html |
| MSG/SEVIRI | NetCDF4 | AERIS | https://www.eumetsat.int/website/home/Satellites/CurrentSatellites/Meteosat/index.html |
| AeroCom | NetCDF4 | METNO | https://aerocom.met.no/ |
| NWP Model data | NetCDF4 | NWP Centres | |
Generated (systematic production)

| Product | Format | Description |
| --- | --- | --- |
| GRASP-AOD | NetCDF-CF | Aerosol size distribution retrieval from optical depth |
| ReOBS | NetCDF-CF | The ReOBS project proposes an advanced method to aggregate, quality-control and harmonize, in one single NetCDF file, as many available geophysical variables from a NF as possible, at hourly scale, for the whole data record spanned by this ensemble of variables. This file makes it easy to perform multiannual and multi-variable studies combining atmospheric dynamics and thermodynamics, radiation, clouds and aerosols, from ground-based observations associated with a NF. |
| Aerosol and Gas trend assessment | NetCDF-CF | Estimate of long-term trends @ACTRIS sites, combining observations with models; interactive web visualization; automated assessment report |
| Data Interpretation and Outlier Identification Tool | NetCDF-CF | Quicklooks for time series data, compared to Copernicus Analysis and Reanalysis model products |
| ? Optimal interpolation and Gap filling tool | NetCDF-CF | Model/data integration products which fill measurement gaps, e.g. in a time series, profile or field |
| Alert Service for National Facilities | geoJSON? | Provides near-real-time updates on special weather situations of interest for research activities at national facilities |
Generated (on-demand services)

Some products will be generated through on-line services and will produce datasets available for a limited time on a web server.

| Product | Format | Description |
| --- | --- | --- |
| Satellite data subsets | NetCDF-CF | Satellite data subsets, spatially and temporally colocated with ACTRIS ground-based measurements |
| Transport modelling products for assessment of source regions | NetCDF-CF | Backward transport modelling with FLEXPART to analyse air transport and the impact of various sources. Develop tools to run FLEXPART operationally and automatically on a regular basis, e.g. monthly, for every site |
| Colocation service of data from contributing networks | NetCDF-CF | Benchmark data products including relevant EMEP and ACTRIS data: PM and/or sulphate with ACTRIS National Facilities compiled in one data product |
| Model Evaluation Service | NetCDF-CF | Automated model evaluation workflow; evaluation reports of different complexity; NRT and reanalysis; climate models |
| NWP Model Evaluation Service | NetCDF-CF | Automated model evaluation workflow; evaluation reports of different complexity for NWP models; NRT and reanalysis; NWP models |
Production of level 3 data solely based on data from ACTRIS observational
platforms
List of ACTRIS level 3 data products
Production of ACTRIS level 3 data and tools through multi-source data
integration services, employing external ground based measurement data
List of ACTRIS level 3 data products
Production of ACTRIS level 3 data products involving regional and global model
data
List of ACTRIS level 3 data products
# Appendix 10: ReOBS workflow diagram
ReOBS workflow diagram
# Appendix 11: Satellite data subsets workflow diagram
Satellite data subsets workflow
Executive Summary
Introduction
1 Data Summary
1.1 Data types and origins
1.2 List of partners who will act as either Data Controllers or Data Processors
2 FAIR Data
2.1 Making data findable, including provisions for metadata
2.2 Making data openly accessible
2.3 Making data interoperable
2.4 Increased data re-use (licensing of data)
3 Allocation of Resources
3.1 Estimated Costs
3.2 Responsibilities
3.3 Long Term Preservation
4 Data Security
4.1 Data Storage and Preservation
4.2 Data Retention and Destruction
4.2.1 Scope
4.2.2 Classification and marking of research project data
4.3 Notification of Criminal Activity
5 Legal and ethical Aspects
5.1 Data Collection, Exchange, and Access
5.2 Personal Data
5.3 Informed Consent Procedure
5.4 Ethics Advisory Board
# Introduction
This version of the Data Management Plan (DMP) is the first iteration released
in project month 6. The DMP will be updated regularly during the course of the
project. The final version will be released at the end of the project runtime
(D8.3). The document has been created by AIT, the project partner in charge of
the project data management task (T8.9), in consultation with all other
project partners. This DMP complies with H2020 requirements [1].
The DMP specifies procedures on the **collection, storage, access, and sharing
of research data within TITANIUM** . Research data in TITANIUM includes
synthetic and real-life data (see section 1).
The DMP categorises the various types of data produced in TITANIUM and it
specifies how data will be made available. The DMP describes:
* the data sets that will be collected, processed or generated by TITANIUM,
* the data management life cycle for all aforementioned data sets,
* the methodology and standards following which data will be collected, processed or generated,
* whether and how this data will be shared and/or made open,
* how the data will be curated and preserved, and
* the procedure for notification of criminal activity.
The consortium partner **AIT** is responsible for implementing the DMP and ensures that it is reviewed and revised during the project runtime. Regular checkpoints on the status of the data will ensure that the Data Management Plan is implemented as foreseen, and that the procedure for notification of criminal activity is strictly complied with. Checking both the status of data and notification of criminal activities are added as standing agenda points in the bi-weekly EB calls.
This DMP refers to social, legal, ethics and privacy aspects, which are addressed in more detail and handled in the context of WP1 and WP2, under supervision of the Ethics Advisory Board (EAB). Due to the privacy- and data-protection-sensitive nature of TITANIUM, this DMP has strong dependencies on various deliverables from these WPs, most of which are still upcoming or work in progress. This and other factors, such as important changes that might occur to the project (e.g. inclusion of new datasets, changes in consortium policies, external factors), will require regular updates of the DMP. A final update will be released at project Month 36 (D8.3).
# Data Summary
The purpose of the data collection in TITANIUM is to develop and validate
novel data-driven techniques and solutions designed to support Law Enforcement
Agencies (LEAs) charged with investigating criminal or terrorist activities
involving virtual currencies and/or underground markets in the darknet. In
this context, TITANIUM will **collect** **publicly available data in
compliance with ethical, legal, social and privacy/GDPR regulations** (see
section 5). The data will only be collected by researchers and for research
purposes and not for forensic purposes. **It will not be used to conduct
criminal investigations and will not be shared with LEAs.**
TITANIUM operates under a strict scientific paradigm. Researchers involved in
TITANIUM only use and operate with **research data** , which includes real-
life data collected from the dark web and synthetic data (see below).
**Research data does not include real-life data by LEAs.** Only in the Field
Labs (WP6) the participating LEAs will, if applicable, make use of real
investigations and related data. However, said data will never leave the
premises of the participating LEAs.
TITANIUM will collect **real-life data** about entities and relationships
aggregated from relevant sources (e.g., underground markets, dark web forums,
TCP/IP level metadata), generate and visualise networks from the aggregated
data. At this point in the project we know that we will collect the following
data for research purposes:
* Publicly available Blockchain data from Bitcoin and eventually other virtual currencies (various database formats)
* Scrapes of publicly available Darknet material carried out within the project (HTML)
* Publicly available scrapes and dumps of darknet markets, but only to the extent that they derive from whitelisted sources with clear provenance (HTML)
Next to the collection of real-life data, TITANIUM will provide **synthetic data** in order to test new investigation approaches. This will be necessary in the absence of a sufficient number of real cases or if the use of real data is precluded for legal, ethical, or security reasons. For example, we will probably use synthetic data to simulate a forensic analysis of seized devices. This data may be entirely synthetic or derived from real-world data. If derived from real-world data, it will not be possible to learn anything about identifiable natural persons from the synthetic data.
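As an illustration of the "entirely synthetic" option, the sketch below generates a random transaction graph with made-up addresses; the parameters and field names are assumptions for the example, not the TITANIUM simulators.

```python
# Entirely synthetic transaction data: random addresses and amounts, so
# nothing can be learned about real persons. Field names are illustrative.
import random
import secrets

def synthetic_transactions(n_addresses=100, n_tx=1000, seed=42):
    rng = random.Random(seed)
    addresses = [secrets.token_hex(20) for _ in range(n_addresses)]  # fake wallet IDs
    return [
        {
            "sender": rng.choice(addresses),
            "receiver": rng.choice(addresses),
            "amount_btc": round(rng.expovariate(2.0), 8),
        }
        for _ in range(n_tx)
    ]

print(synthetic_transactions(n_tx=2))
```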
In particular, the datasets (real-life and synthetic) will be used to fine-tune, train and test the algorithms of WP4 and WP5, and in the WP6 Field Labs (real data where applicable; otherwise synthetic data). The size of the datasets used in TITANIUM is expected to be on the order of hundreds of gigabytes to tens of terabytes. However, this is just a first estimate that will be updated in the next DMP revision.
TITANIUM outcomes are useful for European LEAs. However, due to privacy and
data protection regulations data access and re-use will be restricted to the
TITANIUM consortium partners. All partners who will act as either Data
Controllers or Data Processors have provided and signed Data Protection
Statements, which comply with European data protection regulations.
## Data types and origins
TITANIUM will generate/collect the following types of data:
<table>
<tr>
<th>
**Type**
</th>
<th>
**Description**
</th>
<th>
**Origin**
</th>
<th>
**Format**
</th> </tr>
<tr>
<td>
Raw data
</td>
<td>
* Crawled datasets gathered by the collection services
* Datasets extracted by forensics tools
* Ground-truth datasets (e.g., known mixing service patterns or dark market actor attribution cases)
* Datasets matching real-world identities with addresses of virtual wallets
</td>
<td>
Publicly available Blockchain data from Bitcoin; scrapes of publicly available
Darknet material carried out within the project; publicly available scrapes
and dumps of darknet markets, but only to the extent that they derive from
whitelisted sources with clear provenance
</td>
<td>
CSV
</td> </tr>
<tr>
<td>
Synthetic test data
</td>
<td>
Synthetic test data sets produced by simulators for the evaluation of the
applied methods, avoiding potential privacy or security issues when being
shared across borders
</td>
<td>
Entirely synthetic OR derived from anonymised real-world data complying with
GDPR
</td>
<td>
</td> </tr>
<tr>
<td>
Knowledge graphs
</td>
<td>
Networks (knowledge graphs) constructed from the aggregated data (organisation
and market networks)
</td>
<td>
Publicly available Blockchain data from Bitcoin; scrapes of publicly available
Darknet material carried out within the project; publicly available scrapes
and dumps of darknet markets, but only to the extent that they derive from
whitelisted sources with clear provenance
</td>
<td>
database
</td> </tr>
<tr>
<td>
Interview data
</td>
<td>
Data from interviews with the Field Lab participants
</td>
<td>
Field work: interviews
</td>
<td>
Open Office or MS Word (docx/doc)
</td> </tr> </table>
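To illustrate the "Knowledge graphs" row above, the following minimal sketch
(using the Python networkx library; node and edge attributes are invented for
illustration) shows how an organisation or market network could be constructed
from aggregated records:

```python
import networkx as nx

# Illustrative aggregated records; not real market data.
records = [
    {"vendor": "vendor_a", "market": "market_1", "listing": "listing_42"},
    {"vendor": "vendor_a", "market": "market_2", "listing": "listing_77"},
]

g = nx.MultiDiGraph()
for r in records:
    g.add_node(r["vendor"], kind="vendor")
    g.add_node(r["market"], kind="market")
    g.add_edge(r["vendor"], r["market"], listing=r["listing"])

print(g.number_of_nodes(), g.number_of_edges())  # 3 nodes, 2 edges
```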
A **collection of data sets containing the ground-truths and synthetic data**
collected over the course of the TITANIUM project will be included in D4.4:
Synthetic and ground truth data sets [M18, COB, Data + Report].
**Initial and final results and analysis of the data collected during the
Field Labs** will be included in D6.3: Initial Field Lab studies analysis and
evaluation results [M24] and D6.4: Final Field Lab analysis and evaluation
results [M36].
## List of partners who will act as either Data Controllers or Data Processors
<table>
<tr>
<th>
**Partner no**
</th>
<th>
**Institution**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
AIT AUSTRIAN INSTITUTE OF TECHNOLOGY GMBH (AIT)
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
NEDERLANDSE ORGANISATIE VOOR TOEGEPAST NATUURWETENSCHAPPELIJK ONDERZOEK TNO
(TNO)
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
UNIVERSITAET INNSBRUCK (UIBK)
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
UNIVERSITY COLLEGE LONDON (UCL)
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
FUNDACION CENTRO DE TECNOLOGIAS DE INTERACCION VISUAL Y COMUNICACIONES
VICOMTECH (VICOM)
</td> </tr>
<tr>
<td>
**7**
</td>
<td>
COBLUE CYBERSECURITY BV (COB)
</td> </tr>
<tr>
<td>
**8**
</td>
<td>
DENCE GMBH (DEN)
</td> </tr>
<tr>
<td>
**10**
</td>
<td>
COUNTERCRAFT SL (CCR)
</td> </tr>
<tr>
<td>
**11**
</td>
<td>
BUNDESKRIMINALAMT (BKA)
</td> </tr>
<tr>
<td>
**12**
</td>
<td>
THE INTERNATIONAL CRIMINAL POLICE ORGANIZATION (INT)
</td> </tr>
<tr>
<td>
**13**
</td>
<td>
NATIONAL BUREAU OF INVESTIGATION (NBI)
</td> </tr>
<tr>
<td>
**14**
</td>
<td>
BUNDESMINISTERIUM FUER INNERES (BMI)
</td> </tr>
<tr>
<td>
**15**
</td>
<td>
MINISTERIO DEL INTERIOR (MIR-PN)
</td> </tr> </table>
# FAIR Data
## Making data findable, including provisions for metadata
Due to complications brought about by privacy and data protection issues,
TITANIUM opted out of the Pilot on Open Research Data. Nevertheless, data
management is an important aspect of TITANIUM insofar as it supports and
promotes system and organisational interoperability between cooperating LEAs,
while at the same time ensuring that data protection and privacy regulations
are fully enforced.
Therefore, the findability aspect of the data, in the context of FAIR, is only
relevant to the TITANIUM consortium partners and stakeholders. In the context
of WP4 (Task 4.6) a data registry will be created. However, as this task has
not yet started, its requirements cannot be specified at this point.
WP3 (Task 3.4) will identify **data sharing and provenance tracking** needs
within and between configurable components and develop corresponding
conceptual and technical models. This includes:
* **User contributed contextual information**, such as tags or annotations attached to virtual currency addresses (this data has to be regarded as personal data itself)
* **Provenance data structures** that support case-specific compliance, replicability of analyses, and the generation of verifiable audit trails
One of the options we will investigate is the use of Blockchain technology to
set up a provenance-preserving and privacy-aware dataset ledger, in line with
recently proposed out-of-domain applications of blockchain technology.
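The core idea of such a ledger can be sketched in a few lines of Python: each
entry commits to the hash of the previous one, so any later modification of a
provenance record is detectable. This is an illustration of the principle
under simplifying assumptions, not the design to be investigated in Task 3.4.

```python
import hashlib
import json

def append_entry(ledger, record):
    """Append a provenance record that commits to the previous entry's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    ledger.append({**body, "hash": digest})

def verify(ledger):
    """Recompute the hash chain; any tampered entry breaks verification."""
    prev = "0" * 64
    for entry in ledger:
        body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

ledger = []
append_entry(ledger, {"dataset": "scrape-2018-05", "action": "ingested"})
assert verify(ledger)
```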
Relevant deliverables in this context:
* D2.1: Identification and analysis of the legal, societal and ethical impacts of the TITANIUM tools [M12, KIT, Report]
* D2.2: Analysis of INTERPOL’s legal framework and identification of requirements [M12, INT, Report]
* D2.3: Legal and ethical conditions for the conduct of the Field Labs [M18, KIT]
* D3.4: Report on data sharing and provenance tracking models [M18, AIT, Report]
* D4.6: Dataset registry service [M36, AIT, Software]
## Making data openly accessible
As explained above, **TITANIUM opted out of the Pilot on Open Research Data**.
Therefore, the accessibility aspect of data, in the context of FAIR, is only
relevant to the TITANIUM consortium partners and stakeholders.
* All partners who will act as either Data Controllers or Data Processors (see above) have provided and signed Data Protection Statements.
* The TITANIUM Field Labs will, if applicable, make use of real investigations and related data, but said data will never leave the premises of the participating LEAs.
* The collected publicly available data will only be used for research purposes and not for forensic purposes. The collected data will not be shared with LEAs.
A **dataset registry service** to facilitate sharing and exchange of research
data among project participants and associated stakeholders will be set up in
context of WP4. The distributed registry service keeps a record of datasets
produced within the TITANIUM project and facilitates data sharing and
synchronization among participants and associated stakeholders, adhering to
legal and privacy restrictions. Mechanisms for logging access to the data, as
well as regulations with respect to copying and sharing the data, will also be
addressed in WP4.
**Standardised technical interfaces between TITANIUM service containers**, to
be specified in WP3 and developed in WP4 and WP5, will ensure interoperability
in the execution of the modules. Among others, this will include a usable and
compact **storage and access model for bulk data** containing relevant
evidence (market prices, offers, transactions, entities, media files,
metadata, etc.).
Further details in the context of data accessibility will be clarified once
Task 4.6 has started (e.g. conditions for access, machine-readable license,
how the identity of the person accessing the data will be ascertained). We
might also consider keeping the list of persons allowed to access the data in
the blockchain.
## Making data interoperable
TITANIUM research data will, if possible, be stored in open formats, which
will allow broad re-use of the datasets (to the extent that this is consistent
with privacy and data protection guidelines; refer to Legal and ethical
Aspects). The following **standards for data curation and interoperability**
will be employed:
* In general the work packages will seek to store the data generated/collected during the project in open formats. An open format is defined as “one that is platform independent, machine readable, and made available to the public without restrictions that would impede the re-use of that information.”
* Aggregated data will be stored in CSV files according to the RFC 4180 specification, JSON, and XML, whenever possible (see the sketch after this list).
* We will also seek compatibility with the import/export functions of other forensics platforms, in particular those under development in the ASGARD and BITCRIME projects.
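As a minimal illustration of the CSV point above, the following Python sketch
writes aggregated records in an RFC 4180-style form (CRLF row terminators,
minimal quoting); the field names are assumptions for this example, not the
project's actual schema.

```python
import csv

rows = [{"tx_hash": "ab12cd34", "timestamp": "2018-05-01T12:00:00Z",
         "value_btc": "0.5"}]

with open("aggregated.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(
        f,
        fieldnames=["tx_hash", "timestamp", "value_btc"],
        quoting=csv.QUOTE_MINIMAL,  # quote only fields that need it
        lineterminator="\r\n",      # RFC 4180 rows end with CRLF
    )
    writer.writeheader()
    writer.writerows(rows)
```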
The public availability of selected final publications will enhance
transparency and reproducibility of the applied methods and achieved results.
This approach will support interoperability between LEAs across Europe when it
is required.
## Increased data re-use (licensing of data)
Due to complications brought about by privacy and data protection
requirements, re-use of data, metadata and analysis results is restricted. It
will be investigated if less sensitive data (e.g. synthetic or those compiled
from open sources and samples without association to real persons) can be
stored on a trusted platform (e.g. Cambridge Cloud Cybercrime Center 1 )
under restricted access.
# Allocation of Resources
## Estimated Costs
The core services required for data management are
* Content Management Service (CMS)
This is a system that allows managed access to data and metadata
* File Service (FS)
This is the underlying storage network for digital files and includes the
back-up services necessary for digital preservation
* Data Sharing Platform (DSP)
This is the discovery and distribution service that allows consortium members
to share non-classified information
* Software Sharing Platform (SSP)
This is the management system for project software, which includes versioning,
release management, and issue tracking
It is assumed that all partners will implement CMS+FS locally for the
management of their local project-related data. As the project coordinator, AIT
will in addition provide the Data Sharing Platform (based on the open source
Redmine tool) for the consortium. As the project technical coordinator, TNO
will provide the software sharing platform, based on the open source tool
GitLab.
The estimated costs for data management within the TITANIUM project are based
on known real costs at the coordinating institution (AIT), extrapolated for
each partner based on the size of the institution and its geographic location.
_Figure 1: AIT monthly costs for IT services_
Figure 1 indicates the AIT monthly costs per instance for various IT services,
including File Services (FS: €168.62/month), Content Management Services (CMS:
€36.08/month), and the Data Sharing Platform (RPS, or Redmine Project Service:
€43.17/month). These figures are from 2015, but we assume that inflation,
offset by improving IT performance (e.g. falling storage costs per unit
volume), will lead to fairly stable costs over the three-year project lifetime
of TITANIUM.
_Figure 2: AIT monthly costs for file service_
Figure 2 shows a breakdown of the File Service costs at AIT. The largest
contributing factor is the cost of the EMC storage area network, but other
costs, including system administration and backups, are also included in the
analysis. The point of this figure is to demonstrate that costs at AIT are
well understood; the primary uncertainties in the overall cost analysis are
(a) how well the known costs at AIT predict similar costs at partner
institutions, and (b) to what extent the costs for an instance of a given
service can be fully attributed to a given project.
_Table 1: Estimated Data Management Costs for TITANIUM_
<table>
<tr>
<th>
**Partner**
</th>
<th>
**Services**
</th>
<th>
**Estimated Costs per month**
</th> </tr>
<tr>
<td>
AIT
</td>
<td>
CMS, FS, DSP
</td>
<td>
€ 250.00
</td> </tr>
<tr>
<td>
TNO
</td>
<td>
CMS, FS, SSP
</td>
<td>
€ 250.00
</td> </tr>
<tr>
<td>
UIBK
</td>
<td>
CMS, FS
</td>
<td>
€ 150.00
</td> </tr>
<tr>
<td>
KIT
</td>
<td>
CMS, FS
</td>
<td>
€ 150.00
</td> </tr>
<tr>
<td>
UCL
</td>
<td>
CMS, FS
</td>
<td>
€ 200.00
</td> </tr>
<tr>
<td>
VICOM
</td>
<td>
CMS, FS
</td>
<td>
€ 100.00
</td> </tr>
<tr>
<td>
COB
</td>
<td>
CMS, FS
</td>
<td>
€ 50.00
</td> </tr>
<tr>
<td>
DEN
</td>
<td>
CMS, FS
</td>
<td>
€ 50.00
</td> </tr>
<tr>
<td>
TRI
</td>
<td>
CMS, FS
</td>
<td>
€ 100.00
</td> </tr>
<tr>
<td>
CCR
</td>
<td>
CMS, FS
</td>
<td>
€ 50.00
</td> </tr>
<tr>
<td>
BKA
</td>
<td>
CMS, FS
</td>
<td>
€ 200.00
</td> </tr>
<tr>
<td>
INT
</td>
<td>
CMS, FS
</td>
<td>
€ 200.00
</td> </tr>
<tr>
<td>
NBI
</td>
<td>
CMS, FS
</td>
<td>
€ 50.00
</td> </tr>
<tr>
<td>
BMI
</td>
<td>
CMS, FS
</td>
<td>
€ 100.00
</td> </tr>
<tr>
<td>
MIR-PN
</td>
<td>
CMS, FS
</td>
<td>
€ 100.00
</td> </tr>
<tr>
<td>
**Total costs per month**
</td>
<td>
**€ 2,000.00**
</td> </tr>
<tr>
<td>
**Total costs project lifetime**
</td>
<td>
**€ 72,000.00**
</td> </tr> </table>
The results of our estimate are shown in Table 1. We estimate monthly costs of
approximately two thousand euros for data management for the consortium. Over
the 36-month lifetime of the project, this leads to a total estimated cost of
€72,000.
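The arithmetic behind Table 1 can be reproduced directly (a simple sketch; the
per-partner figures are the estimates from the table):

```python
monthly = {
    "AIT": 250, "TNO": 250, "UIBK": 150, "KIT": 150, "UCL": 200,
    "VICOM": 100, "COB": 50, "DEN": 50, "TRI": 100, "CCR": 50,
    "BKA": 200, "INT": 200, "NBI": 50, "BMI": 100, "MIR-PN": 100,
}
total_per_month = sum(monthly.values())  # 2000
total_lifetime = total_per_month * 36    # 72000 over the project lifetime
print(f"EUR {total_per_month:,}/month, EUR {total_lifetime:,} total")
```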
These costs are covered by overhead costs at each institution. However, it
should be noted that these costs will continue well beyond the lifetime of the
project.
## Responsibilities
The consortium partner **AIT** is responsible for implementing the DMP and
ensures that it is reviewed and revised during the project runtime.
**Name and contact details** of the person responsible on behalf of the
beneficiary AIT during the project runtime:
Michela Vignoli
AIT Austrian Institute of Technology GmbH
Digital Safety and Security Department
Donau-City-Straße 1 1220 Vienna
[email protected]
+43 50550-4216
**All partners** who are either controlling or processing data are responsible
for managing the data that is provided to or processed in the context of
TITANIUM.
**Data Controllers and Data Processors** will be responsible for storing,
preserving, backing up, retaining or destroying the data provided for use in
TITANIUM according to security and data protection regulations in place.
## Long Term Preservation
Due to complications brought about by privacy and data protection
requirements, long term preservation is not applicable to TITANIUM.
# Data Security
## Data Storage and Preservation
Data used in TITANIUM will not be centrally stored or managed. Each partner
who is either controlling or processing data is responsible for managing the
data according to security and data protection policies in place.
The data will be stored at each partner’s premises according to the data
protection policies of these partners. The data protection policies of each
partner have been collected and added as an annex to the institutional Data
Protection Statements (refer to D1.1). The partners’ data security and
protection policies comply with the following general policies defined for
TITANIUM:
* The institution has the necessary technical resources and an operations team in place for storing, preserving, and protecting TITANIUM data
* The institution has appropriate data access and security measures in place, e.g. access right management, risk management, secure access and transfer of data
* The institution has appropriate storage and backup measures in place, e.g. system related backups, backup procedures
In the interest of open research data, we will investigate the possibility of
finding a permanent home for less sensitive data used in TITANIUM (e.g.,
synthetic data or data compiled from open sources and samples without
association to real persons that cannot be de-anonymised). For example, it
will be considered to liaise with the Cambridge Cloud Cybercrime Center 2 .
## Data Retention and Destruction
This draft policy (in document version 1.0) is based on best practices
developed in other H2020 security-related projects (refer to D1.3: POPD -
Requirement No. 4).
### Scope
This procedure applies to all TITANIUM researchers who require the retention
of research records or documentation. In particular, attention must be paid to
the classification of data and documentation, restrictions on storage and
electronic transmission and each consortium partner’s responsibilities under
national legislation.
### Classification and marking of research project data
#### Marking hard copy research project data and documentation
* The TITANIUM policy on protecting information has three classifications or markings: open, protect and control.
* All research project data must be classified and marked as “protect” as a minimum standard; sensitive personal data or data which might lead to significant personal distress if disclosed should additionally be classified and marked as “control”.
* Chief Investigators must ensure that all research project data is appropriately marked, in line with the guidance provided by TITANIUM (a minimal marking sketch follows this list).
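The marking rules above can be expressed compactly; the following minimal
Python sketch (the enforcement logic is an assumption for illustration, not
project tooling) encodes the three classifications and the minimum-standard
rule:

```python
from enum import IntEnum

class Marking(IntEnum):
    OPEN = 0
    PROTECT = 1  # minimum standard for all research project data
    CONTROL = 2  # sensitive personal data / risk of significant distress

def required_marking(is_research_data, is_sensitive_personal):
    """Return the minimum marking required under the TITANIUM scheme."""
    if is_sensitive_personal:
        return Marking.CONTROL
    if is_research_data:
        return Marking.PROTECT
    return Marking.OPEN

assert required_marking(True, False) is Marking.PROTECT
assert required_marking(True, True) is Marking.CONTROL
```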
#### Storing hard copy research project data and documentation
* All hard-copy research project data and documentation must be stored in compliance with TITANIUM policies and guidance and must be classified and marked appropriately.
* All original hard-copy project data must be held securely; irreplaceable data (e.g., consent forms, interview notes, returned questionnaires) and data from which individuals might be identified (e.g., contact details) must be kept in a locked drawer or equivalent when not in use and should not be removed from the premises of the researcher.
* Administrative or supporting documentation (e.g., letters of approval, original protocol/application form, updates and amendments) should be held in a project folder in a secure location; master files which include participant/sample and other coding information must be treated as project data and kept in a locked drawer when not in use.
* All hard-copy documentation should be stored in a manner which facilitates legitimate use and access; file names should be logical and relevant; version control is critical and it should be clear which version of any document is the most recent or currently approved for use.
#### Storing electronic research project data
* All electronic research project data must be stored in compliance with TITANIUM policies and guidance and must be _**classified and marked appropriately** . _
* All original, irreplaceable electronic project data and electronic data from which individuals might be identified must be stored on appropriately supported media, preferably appropriate centrally-allocated secure server space or similar; such data must never be stored on portable devices or temporary storage media. TITANIUM protocols for the storage of classified information must be followed.
* All other electronic project data must be held on appropriate centrally-allocated secure server space which is accessible to members of the project team; such data must not be held on personal or portable devices unless these are encrypted in line with TITANIUM requirements and except when this is necessary for the purposes of working off-site; amended documents must be returned to the appropriate TITANIUM maintained shared space when the work has been completed.
* Under no circumstances should original, irreplaceable data or sensitive personal data be stored using cloud storage services as this can place data outside UK and EU legal control.
* All electronic data should be stored in a manner which facilitates legitimate use and access; file names should be logical and relevant; version control is critical and it should be clear which version of any document is the most recent or currently approved for use.
#### Controlling access to research project data
* Research project data should be stored as indicated above and should be protected by password, encryption (for electronic data) or lock and key (hard copy).
* Research project data, whether electronic or hard-copy, should be accessible only to those people who have a legitimate purpose, including members of the project team, internal and external auditors and representatives of regulatory bodies.
* Members of staff, students and people external to TITANIUM who do not fall into the categories above should not be given access to research project data without good reason and/or prior permission.
* Research project data must be maintained in such a way that access is only possible by persons allowed to access the data.
* Requests for access to research data by parties not part of TITANIUM above must be directed through the Chief Investigator in the first instance.
#### Archiving and disposal of research project data
* All research project data, following the end of the ten-year data retention default period, may be retained by the researcher, under condition that it is pseudonymised or anonymised.
* Applications for archiving research data must be made in line with the process described in the TITANIUM archiving procedure.
* Research project data that is no longer required may be destroyed, subject to the demands of the publication cycle, continuing or follow-on projects or the requirements of the European Union.
#### Handling personal information and European Law
* TITANIUM is bound by, and all researchers are required to be aware of and to adhere to the provisions of, European Law relating to data protection.
* The primary function of data protection law is to ensure that an individual’s personal information is used lawfully (particularly only with a legal basis, in case of consent, only as agreed with that individual) and is not used for any purposes incompatible with those for which the data has been collected, including forward transmission to a third party.
* The following principles should be adopted when handling personal information:
* data should be anonymised at the point of collection where possible;
* all other data, unless there is a specific reason for maintaining an identifying link, should be anonymised as soon as possible after collection;
* data that cannot be anonymised must be held securely and in confidence, with coded access as appropriate, for a pre-set period of time which is subject to the consent of the individual concerned and the needs of the study or potential future requirements;
* anonymous, raw data should be transcribed or transferred for analysis as quickly as possible after collection;
* once transcribed or transferred, the data should be stored securely in condensed form;
* as soon as transcription and validity have been assured to the satisfaction of the Chief Investigator, the raw data may be destroyed, subject to the requirements of any contract, funding body or other interested party, although there is no absolute requirement relating to this;
* condensed or interim data must be handled and stored as indicated in the Code of Practice;
* interim data, all other related materials and results will remain the responsibility of the researcher or other appropriate individual until it can be demonstrated that: the project has been completed to the satisfaction of the funder/sponsor, any publications have been finalised to the satisfaction of all concerned and/or until any queries relating to the project have been addressed; and
* all retained information relating to the project should then be transferred to a secure local repository (data store or similar), central TITANIUM archive or specified data bank for the remaining period specified in the Code of Practice.
#### Breach of Data Protection
• Any unauthorised or unlawful processing, accidental loss or destruction of,
or damage to, personal information held by any TITANIUM researcher in both
electronic and hard copy format will constitute a breach of TITANIUM Data
Protection. Any such breaches must be reported to the project Data Protection
Officer (DPO) and the relevant national authority.
## Notification of Criminal Activity
The Procedure for Notification of Criminal Activity is defined in D1.9: OEI -
Requirement No. 11. The revised version from 2019-07-25 includes additional
information and examples with regards to when the notification of the DPO has
to occur.
Member states of the EU are independent and autonomous legal bodies with
individual legislative powers in the areas of criminal law and criminal
procedure law. Influences from the European level exist through “due process
of law” principles and general cross-border related topics such as mutual
recognition of judicial decisions, avoiding conflicts of jurisdiction,
improving exchange of information, etc. Criminal law is still subject to the
autonomous decision of the member states and not directly governed by EU
legislation. The legislative powers cover the territory of the autonomous
state and are generally bound to geographical borders. The resulting
differences in criminal law between countries represented in TITANIUM lead to
different obligations regarding the notification obligation within the
project. Following this principle of territoriality, cross-jurisdiction work
can result in different obligations depending on the consortium members
whereabouts. Most member states included an obligation to report criminal
activity through a crime of omission. The specific legal conditions for the
respective member states represented in TITANIUM are described in detail in
D1.9: OEI - Requirement No. 11, Procedure for Notification of Criminal
Activity. All members of the project are required to read and understand the
notification process as laid down in D1.9 carefully, to be sufficiently
informed about their legal obligations in their context of work.
In general, if a consortium member observes any criminal activity, whether it
is planned to be committed or has been committed, the member or participant
1. must notify a competent authority, if the applicable national or international law stipulates an obligation to notify.
2. shall not notify any authority, if the applicable law does not stipulate an obligation to notify.
In all cases where notification was put into consideration or was conducted,
the consortium member will inform the project’s DPO pursuant to the procedure
described in detail in D1.9. The DPO will assist the partner in his/her
decision and in evaluating the necessity of authority notification. The
decision process and reasoning for a notify/not-notify decision is tracked.
Tracking cases of obvious non-notification is not foreseen. To avoid omission
of required notification, TITANIUM partners are trained and informed on a
regular basis to allow a correct first assessment of their notification
obligation in the concrete situation. While each consortium member is
individually liable to adhere to the national regulations, the project's legal
team and DPO will regularly inform and remind the consortium members about
their legal obligations to report certain criminal activity.
# Legal and ethical Aspects
## Data Collection, Exchange, and Access
Our data collection approach, both in terms of policy and implementation, is
itself a primary project outcome, and is being documented in various project
Deliverables, such as D2.1 (M12), D4.2 (M30), D4.3 (M24), D4.5 (M36), D5.3
(M30), D5.4 (M36), D5.5 (M30), and D5.6 (M36). We operate under a scientific
paradigm, which allows us to gather data for research purposes. **The data
will not be used for forensic purposes.**
Research data collection will at all times be constrained by the policies
elaborated in work packages 1 and 2, in particular through D1.1 (M1+updates),
D1.2 (M1+updates), D2.1 (M12), D2.3 (M18), and D4.1 (M18). In terms of data
collection, processing and retention we operate in accordance with the GDPR.
TITANIUM primarily processes pseudonymous data from public sources for
research purposes. As specified in our data protection statement on the
project website 3 , **all** data subjects have the right to request access
to and rectification or erasure of personal data or restriction of processing
concerning the data subject and to object to processing as well as the right
to data portability.
Legal and ethical issues related to the intended data collection, data
exchange, and access to relevant data by law enforcement through the TITANIUM
tools are addressed in WP2. Next to the legal issues, various ethical issues
are being addressed in this context. In Deliverable D2.1, the legal impacts of
TITANIUM as well as significant ethical issues are discussed in more detail.
Appropriate safeguards are proposed to minimise any negative impacts of
TITANIUM, while maintaining the operational excellence of the tools. In terms
of ethical issues, the technical requirements of TITANIUM tools should at
least address security, data minimisation, confidentiality, purpose-binding
(access control), and transparency (logging, integrity protection)
constraints. 4 Throughout the project, a value-sensitive design, ensuring
adequate ethical safeguards, is being pursued. 4
The current legal situation and open legal questions regarding the collection
of public data and the use of forensic tools to study the underground markets
are being analysed; different technical solutions will be proposed and
implemented in WP4, in which tools and services for the automated collection
of multi-modal data are being developed. They will most probably relate to,
for instance, smart crawling technologies, on-demand crawling (crawling only
if there is a specific suspicion) or archiving of the blockchain in a specific
situation, which will depend on the legal basis in the national laws of the
partners.
WP4 will implement **privacy-preserving control measures** into the TITANIUM
data collection tools/services to ensure compliance of the data collection
process with applicable privacy law. A practical privacy-aware control
structure for the operation of the tools will be developed (privacy-preserving
data obfuscation tools; tuneable computing architectures for the various
planned machine learning components). With this control structure, it will be
possible to effectuate certain privacy legislation on the level of tool
operation.
**D4.1** (M18) describes the approach suggested by TITANIUM to minimize data
collection and storage when crawling Darkweb marketplaces and forums, with the
aim to respect the privacy of individuals as much as possible. The first and
most important concept of data minimization involves limiting the depth of
related entities that is presented to the end user (LEA officer). This is
relevant in a situation where a concrete starting point for an investigation
on dark web and/or related virtual currency information is used to
automatically crawl and scrape information. This concept is being implemented
in the so-called ‘ephemeral monitor’.
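The depth-limiting idea can be illustrated with a short Python sketch:
starting from a concrete seed (e.g. a suspect address), only entities within a
fixed number of hops are expanded and presented. The in-memory graph here is a
stand-in for live crawling, and all names are invented.

```python
from collections import deque

def related_entities(graph, seed, max_depth=2):
    """Breadth-first expansion of related entities, capped at max_depth hops."""
    seen, queue, result = {seed}, deque([(seed, 0)]), []
    while queue:
        node, depth = queue.popleft()
        result.append((node, depth))
        if depth == max_depth:
            continue  # data minimisation: do not expand beyond the cap
        for neighbour in graph.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, depth + 1))
    return result

graph = {"addr1": ["vendorA"], "vendorA": ["addr2"], "addr2": ["vendorB"]}
print(related_entities(graph, "addr1", max_depth=2))
# [('addr1', 0), ('vendorA', 1), ('addr2', 2)] -- vendorB is never expanded
```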
The second concept entails hiding all personally identifiable information by
recognizing them and encrypting them. Building upon the work of the FP7
project PRACTICE, it is acceptable (under the right circumstances) to encrypt
personal information and regard this encryption as ‘deleting personal
information’. The so-called persistent monitor will implement this data
minimization method and will act as a secure index to the ephemeral monitor.
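A minimal sketch of this second concept, using the Python `cryptography`
package (the record fields are illustrative; key handling in the persistent
monitor would be considerably more involved):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # would be held in a separate, secured key store
cipher = Fernet(key)

record = {"listing": "item_42", "contact": "[email protected]"}
# Replace the recognised PII field with ciphertext; without the key the
# record no longer exposes the personal data, and discarding the key is
# the practical equivalent of deletion.
record["contact"] = cipher.encrypt(record["contact"].encode()).decode()
```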
As the TITANIUM tools are envisaged to be used under different legal bases, a
common configuration model will be developed. This model enforces the
compliance of tools with the specific requirements in every instantiation of
the TITANIUM tools where the country, case or suspicious facts, and legal
bases for data collection and sharing are known. The model may leverage
INTERPOL’s existing approach to data processing on a global stage. It includes
at least security, data minimization (confidentiality), purpose-binding
(access control), and transparency (logging, integrity protection)
constraints.
TITANIUM has been designed with a tools-only approach – that is, it will not
perform any law-enforcement functions, or provide direct data analysis or
forensic services for law enforcement; rather, it will conduct research to
develop forensic tools that will then be passed on to appropriate authorities
to use based upon their applicable legal frameworks. The tools will be
designed to support their use in particular legal contexts, and at this point
any data collected by those tools will not be collected by the consortium, but
rather by law enforcement where there is a clear legal basis – for example, in
the context of an ongoing investigation. The main purpose of data collection
is to fine-tune, train and test the algorithms of WP4 and WP5.
WP2 also analysed legal and ethical conditions for the conduct of the Field
Labs, in which TITANIUM data will be used and processed. The research focuses
on questions of a possible compliance of the planned field tests and developed
tools with EU data protection law, such as Directive (EU) 2016/680 on
processing of personal data in the law-enforcement sector, the General Data
Protection Regulation, where applicable, as well as Articles 7 and 8 of the
Charter of Fundamental Rights of the European Union (CFR) and Article 8 of the
European Convention on Human Rights (ECHR), Council of Europe Convention 108
from 1981 and Recommendation No. R (87) 15 and the respective case law of the
CJEU and the ECtHR. WP2 will assess if a legal basis exists to carry out the
field tests with real data, or if they can only be carried out based on the
use of synthetic data instead. To ensure compliance with national data
protection and criminal law provision, the WP2 team will closely cooperate
with national experts and data protection officers of the law enforcement
partners involved.
Relevant deliverables in this context are:
* D1.3: POPD - Requirement No. 4
* D2.1: General legal, societal and ethical impacts of the TITANIUM tools [M12, KIT, Report]
* D2.2: Analysis of INTERPOL’s legal framework [M12, INT, Report]
* D2.3: Legal and ethical conditions for the conduct of the Field Lab tests [M18, KIT, Report]
* D2.4: Report on legal, societal and ethical impact assessment [M36, TRI, Report]
* D4.1: Privacy-preserving control techniques [M18, TNO, Software]
* D4.3: Adaptive data collection containers [M30, TNO, Software]
## Personal Data
Throughout the project lifetime, TITANIUM will use only publicly available
data (i.e. data published in virtual currency blockchains or on Internet dark
market sites) compliant with ethical, legal, social, and privacy/GDPR
regulations, or will make use of synthetic data (refer to Grant Agreement,
Part B, Section 5.1). The data will only be collected by researchers and for
research purposes, and not be used for forensic purposes.
However, TITANIUM's research methods will include the processing of three
categories of data which may raise personal data protection requirements: 1)
cryptocurrency data, 2) other online data sources (e.g. darknet
markets/fora), and 3) synthetic data on “local” digital forensic evidence.
TITANIUM does not envisage the use of real-world data in the context of 3),
and this category of data does not therefore present data protection risks in
this project. The other aspects are being addressed in WP2.
TITANIUM will not process special categories of personal data (e.g., health,
sexual lifestyle, ethnicity, political opinion, religious or philosophical
conviction) within the meaning of the GDPR. This exemplary list is based on
Article 9 GDPR 5 that describes data that is deemed particularly sensitive
by the European legislator. TITANIUM implemented measures to exclude such data
from the processing during research as far as necessary. Cryptocurrency
transaction data, which is processed in context of TITANIUM, is not listed as
a special category of data. Nevertheless, financial transactions can be
“sensitive”. To determine the “sensitivity” of data, one must distinguish
between data that has deliberately been made available to the public by the
data subject and data that has not been made public by the data subject.
In the case of cryptocurrency transaction data, the data subject has a lowered
expectation that the transaction data will not be accessed, seen, or even
processed by third parties. There is consensus that data that is voluntarily
made public by the data subject – be it on a blockchain, the darknet or the
surface web – is deemed less sensitive by the data subject than private (i.e.
non-public) data. The data subject voluntarily decided to make information
publicly available and can hence not expect that the data is not accessed/seen
by the public 6
. The EU legislator has consistently followed a “non-paternalistic” approach
that allows data subjects to decide freely how to make use of their data.
As a consequence, **the data subject is required to individually evaluate the
risks of their actions (i.e. publication of data)** . To ensure the data
subject is able to enforce its decisions the GDPR implemented a broad range of
measures to enforce the decisions (e.g. rectification, erasure, information).
Accordingly, **participants of blockchain-based cryptocurrencies can be
expected to be aware of the public availability of their transaction data and
potential further processing** . Public transaction data is hence deemed less
sensitive than e.g. FIAT transactions. Cryptocurrency users have a lower
expectation of sensitivity of their transaction data – that **does not** mean
they have a lowered expectation of anonymity. In contrast, transactions with
FIAT money are subject to an expectation of sensitivity/confidentiality but
not to anonymity in relation to banking services. The data subject does not
expect anybody else except the bank and the receiver to see/process his
transaction data. In such cases, the transaction data is deemed more sensitive
7 . Expectations towards **accessibility and re-use of published data** by
users need to be differentiated from their **privacy expectations**. In the
given context, the latter describes the user's expectation that his or her
publicly available data cannot be traced back to him or her. This difference also
exists – and may be more comprehensible – in the context of the darknet
content. In this case, the user has a very high privacy expectation (i.e.
expectation to remain anonymous). She does not have a high expectation that
her postings and comments on a publicly accessible webpage remain unseen by
the public eye 8 . This, of course, does not hold true for closed
communities, where the author only expects a specific group or person to see
his post/comment/message. TITANIUM does **not** access closed communities or
private communication.
To support this, there is a long line of case law 9 as well as scholarly
debate, not only in the EU but also in the US, on how far-reaching the privacy
expectations of data subjects are in other public contexts. While these
sources are not specifically aimed at publicly available data from the
blockchain or the darknet, the comparison shows that legislation and case law
currently do not particularly restrict access to publicly available data and
expect data subjects to be sufficiently careful about which information they
provide. With regard to scientific data processing, this “mindset” can also be
observed in other legal contexts, e.g. in the novel copyright directive (EU
2019/790), which constitutes an exemption for text and data mining for
scientific purposes with regard to potential copyright infringements.
Having said that, the public availability of data is not a “carte blanche” for
data processing. The privacy concerns (i.e. the wish to remain anonymous) of
the users must be heard and addressed in the research context, and research
projects must adhere to the high data protection standards of the GDPR. The
GDPR generally privileges data processing for scientific purposes and sets up
specific rules for data that is not obtained from the data subject, as is the
case in TITANIUM. The data processing (collection and analysis) is based on
Art. 6 (1) lit. f GDPR. In compliance with the GDPR, TITANIUM informs the
affected data subjects pursuant to Art. 14 (1) GDPR in an easily
understandable manner. This means that people are informed about the project's
activities on its public website in a specific privacy statement, in news and
updates on the findings, and with references to scientific publications. To
support this, the consortium created an “easy-to-read” version of its privacy
statement to ensure everyone is able to understand the project's goals and
approaches. Further, the project's partners are in touch with the relevant
communities to ensure all interests are considered. With regard to the privacy
expectations of the affected data subjects, two points need to be noted.
First, TITANIUM respects the wish of users to remain anonymous and does not
identify individual users, neither in the cryptocurrency nor in the darknet
context.
Second, TITANIUM does not identify itself as an antagonist to the
cryptocurrency or the darknet communities. On the contrary, the project
produces novel, valuable insights on possible analysis techniques of
cryptocurrency protocols and the darknet, helping the communities to
understand the risks of cryptocurrencies and tackle false expectations of
anonymity. TITANIUM participates in the ongoing discussion on how to handle
publicly available data and how to protect the privacy of the data subjects,
not only by publishing research on the legal, ethical, societal and technical
outcomes of the project, but also through continuous communication with
relevant stakeholders such as the cryptocurrency communities, civil society
organisations, banking officials and law enforcement agencies as part of D2.4
(PIA+), to ensure all interests are sufficiently taken into consideration.
In conclusion, TITANIUM complies with the current legal and ethical framework
and additionally puts immense effort into balancing the interests of
cryptocurrency and darknet users, the science community and law enforcement,
beyond the legal requirements that currently govern the use of publicly
available data. For the DMP (D8.2), the current legal and ethical situation is
decisive and must be distinguished from parallel (scholarly) discussions 10
conducted at the legal, ethical and societal level. If additional sensitive
personal data not covered by the above-named examples arise, we will determine
how to process/delete/anonymise it. This will be carried out in the context of
WP2 and under the supervision of the DPO and the Ethics Advisory Board (EAB).
TITANIUM will not transfer any personal data to non-EU countries. Based on the
findings in Task 2.2 on the INTERPOL framework, a transfer of some data via
INTERPOL might be considered. In general terms, INTERPOL's role is not that of
an investigatory agency but rather of a support to its member countries, so
the countries remain in control of the data processed by INTERPOL, with some
exceptions.
TITANIUM has set up a Data Protection Notification Procedure (refer to D1.1:
POPD - Requirement No. 1) and has nominated a Data Protection Officer (Refer
to D1.2: POPD - Requirement No. 2).
## Informed Consent Procedure
In context of the data collected in the interviews to be done in the Field
Labs (WP6) **informed consent documentation** will be collected from the Field
Lab participants (refer to Grant Agreement, Part B, Section 5.1). The informed
consent form will be designed in D1.7 and WP2 (Task 2.3).
TITANIUM anticipates collecting the following personal data from participants:
* Name - necessary to ensure that informed consent has been acquired from all participants. Interview transcripts will be anonymised.
* Job title/role - necessary for the analysis of the use of the TITANIUM tools by different roles within the LEA organisations.
* Contact email address - necessary to keep participants updated about the progress of the field labs, and any changes in research practices.
## Ethics Advisory Board
TITANIUM constituted an independent Ethics Advisory Board, which will have
access to TITANIUM's activities and all deliverables. It will supervise data
collection and processing and advise about potential impacts on human rights.
TITANIUM partners will inform and consult with the EAB when it comes to the
processing of research data and to assess whether the datasets contain
sensitive or personal data.
**Executive summary**
A new element in the Horizon 2020 is the use of data management plans
detailing what data the project generates and how this data is accessible. The
purpose of this Data Management Plan (DMP) is to provide an analysis of the
main elements of the data management policy that will be used by the STOP-IT
project with regard to all the datasets that will be generated or collected by
the project. This deliverable presents the first version of the DMP for the
STOP-IT project. First, it presents the key considerations made to ensure open
access to the project’s publications. Next, we describe the background for why
and how STOP-IT needs to be an open access project, influencing the overall
data management processes. This deliverable describes the data sets to be
gathered, processed and analysed. These data set descriptions follow the DMP
template provided by the European Commission. This template was circulated to
the project partners responsible for the different pilot studies to be
conducted, and partners completed the data set descriptions according to the
current plans for gathering and analysis of data as well as the methods and
processes foreseen to be applied to ensure compliance with ethics
requirements. In cases in which open access to research data would risk
compromising the privacy of study participants, data will not be shared or
made accessible. The DMP is not a fixed document, but evolves during
the lifespan of the project; in fact it functions as a dynamic document of
agreements.
**Data Management Plan**
1. **Introduction**
**1.1 Purpose of the document**
This Data Management Plan (DMP) describes the data management life cycle for
the data sets to be collected and processed by STOP-IT. The DMP outlines the
handling of research data during the project, and how and what parts of the
data sets will be made available after the project has been completed. This
includes an assessment of when and how data can be shared without disclosing
directly or indirectly identifiable information from study participants [1],
[2].
In principle, publicly funded research data are a public good, produced for
the public interest that should be made openly available with as few
restrictions as possible in a timely and responsible manner that does not harm
intellectual property and confidentiality. On this basis, the DMP intends to
help researchers consider at an early stage, when research is being designed
and planned, how data will be managed during the research process and shared
afterwards with the wider research community.
The DMP specifies the availability of research data, describes measures to
ensure data are properly anonymized to ensure the privacy of informants and
respondents, and to ensure the open data strategy does not violate the terms
made with the Communities of Practice operated by WP2. In any case, the
developed tools and models within the project (WP4) will not contain any case
or company specific information, sensitive data or related inputs and as such,
its availability will not be of threat to any water utility or other
infrastructure operator.
With regard to access to research data, STOP-IT will make the data and
metadata available in the project-internal website research data repository.
Project members will, in this repository, have access to both data and
metadata. For the time being, research data is planned to be archived at the
European repository zenodo.org to ensure re-use in future research projects
and follow-up studies.
With regard to open access to scientific publications, STOP-IT aims to publish
in open access journals (gold open access), and to make publications behind
pay-walls available as final peer-reviewed manuscripts in an online repository
after publication (green open access). To ensure gold open access, the STOP-IT
project will give priority to relevant gold open access journals when choosing
where to publish. With regard to the latter, following the recommendations of
the data management plan ensures we only submit our work to journals that
grant easy access to third parties.
The benefits of a well-designed DMP concern not only the way data are treated
but also the successful outcome of the project itself. A properly planned DMP
guides the researchers first to think about what to do with the data, and then
how to collect, store and process them. Furthermore, planning data treatment
is important for addressing security, privacy and ethical aspects in a timely
manner. It also ensures that research data are kept track of in case of staff
or other changes. The DMP can also increase preparedness for possible data
requests. In short, planned activities, such as the implementation of a
well-designed DMP, stand a better chance of meeting their goals than unplanned
ones.
**1.2 Intended readership**
This deliverable is intended for use internally in the project only and
provides guidance on data management to the project partners and participants.
It is particularly relevant for partners responsible for data collection and
pilots. It is a snapshot of the DMP at the current stage; however, the DMP
will evolve throughout the project as new procedures etc. are added or
existing ones are changed.
The process of planning is also a process of communication, increasingly
important in a multi-partner research. The characteristics of collaboration
should be accordingly harmonised among project partners from different
organisations or different countries. The DMP also provides an ideal
opportunity to engender best practice with regards to e.g. file formats,
metadata standards, storage and risk management practices, leading to greater
longevity and sustainability of data and higher quality standards.
Ultimately, the DMP should engage researchers in conversations with those
providing the services (e.g. water utilities). In this context, the DMP
becomes a document in accordance with relevant standards and community best
practice.
**1.3 Structure of this document**
This deliverable is structured as follows:
* Section 1 is the introductory chapter describing the main purposes of the DMP
* Section 2 describes the guiding principles for the overall data management of STOP-IT.
* Section 3 presents the data sets to be gathered, processed and analysed, considering the H2020 DMP template [2]. For each data set, we will: (i) provide an identifier for the data set to be produced; (ii) provide the data set description; (iii) refer to standards and metadata; (iv) describe how data will be shared; and (v) describe the procedures for archiving and long-term preservation of the data.
* Section 4 describes how STOP-IT is aligned to the Horizon 2020 mandate on open access to publications.
**1.4 Relationship with other deliverables**
This document complements the following deliverables:
* D9.3 – Dissemination and Communication Plan
* D10.2 – Management of personal data
* D10.3 – Big Data approach and privacy
* D10.4 – How to deal with data not publicly available
This list of related deliverables may be expanded during the course of the
project.
**2 Guiding principles**
The legal requirement for open research data in the Grant Agreement is not
applicable. However, meeting the legal requirements for publications must not
compromise the privacy of informants participating in the different STOP-IT
interviews, focus interviews or pilot studies; this is ensured by following
the ethics requirements (D7.4). This DMP assesses when and how data can be
shared within a sound research-ethical framework, where directly or indirectly
identifiable information is not disclosed at any stage of the research process
(following guidelines specified in [1], [2]).
In addition, we will return to this in section 4. Below, in section 3, we
describe the data sets to be gathered and processed in STOP-IT, and the
procedures followed to ensure access to these data sets without violating the
privacy of informants taking part in the STOP-IT pilot studies.
Figure 1 illustrates the main points for ensuring open access to research data
and publications in the project (adapted from [1]).
**Figure 1: STOP-IT data sets and publications**
Finally, it is worth noting that open access to research data and publications
is important within the context of responsible research and innovation 1 .
Ensuring research data and publications can be openly and freely accessed
means that any relevant stakeholder can choose to cross-check and validate
whether research data are accurately and comprehensively reported and
analysed, and may also encourage re-use and re-mixing of data. A better
exploitation of research data has much to offer, also in terms of alleviating
the efforts required by study participants as well as researchers. Optimizing
sharing of research data could potentially imply less duplication of very
similar studies as previously collected data sets may be used at least as
additional sources of data in new projects. Again, we emphasize that open
access to research data must comply with sound research ethics, ensuring no
directly or indirectly identifiable information is revealed.
**3 Data sets to be gathered and processed in STOP-IT**
In this chapter we describe the different data sets that will be gathered and
processed by the STOP-IT-partners. These descriptions follow the template
developed within the project. It will be updated by the project-partners
responsible for the different pilots to be conducted. The data sets follow
many of the same procedures, e.g., with regard to using the project
collaboration tool as a data repository. This means the same wording is often
repeated in the different data sets. As each data set description should give
a comprehensive overview of the gathering, processing and open access
archiving of data, we assessed it as necessary to repeat the procedures in the
different data set descriptions. The name for each data set includes a prefix
"DS" for data set, followed by a case-study identification number, the partner
responsible for collecting and processing the data, as well as a short title.
The template requires that information about a data set is provided. We have
primarily based the outlining of how and what data will be created on the
guidelines provided by the European University Institute [3].
Table 1 gives a preliminary overview of the datasets to be collected. The
descriptions of each data set, following the STOP-IT template, will be
provided in the following sections in later versions of this deliverable. The
STOP-IT dataset template is provided in Annex A.
# Table 1: Overview of data sets
<table>
<tr>
<th>
**No.**
</th>
<th>
**Identifier/Name**
</th>
<th>
**Brief description**
</th>
<th>
**Dissemination**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Threats/Events
</td>
<td>
Includes the physical and cyber threats or events in the water utilities
infrastructure
</td>
<td>
CO
</td> </tr>
<tr>
<td>
2
</td>
<td>
Risks
</td>
<td>
Refers to the RIDB (task 3.2)
</td>
<td>
CO
</td> </tr>
<tr>
<td>
3
</td>
<td>
Reduction Measures
</td>
<td>
Refers to the RRM (task 4.3)
</td>
<td>
PU
</td> </tr>
<tr>
<td>
4
</td>
<td>
Population
</td>
<td>
Refers to the population data for each FR water utility
</td>
<td>
PU
</td> </tr>
<tr>
<td>
5
</td>
<td>
Supply
</td>
<td>
Includes the data of the supply for the pilot demos
</td>
<td>
PU
</td> </tr>
<tr>
<td>
6
</td>
<td>
Demand
</td>
<td>
Includes the data of the demand for the pilot demos
</td>
<td>
PU
</td> </tr>
<tr>
<td>
7
</td>
<td>
Critical Infrastructure
</td>
<td>
Refers to the existing critical
infrastructure for each FR water utility
</td>
<td>
CO
</td> </tr>
<tr>
<td>
8
</td>
<td>
Vulnerable assets
</td>
<td>
Refers to the list of vulnerable assets
</td>
<td>
CO
</td> </tr> </table>
**3.1 Data discoverability (making data Findable)**
The data will be stored, by each responsible partner, following their internal
policy guidelines and provided with metadata making them discoverable. For
example, for the machine learning algorithms to perform as intended, the
timestamps assigned by the data originator (or as otherwise agreed) should be
known. The details are not yet available and will be defined in the next few
months with the partners who will be responsible for the specific procedures
(e.g., machine learning).
**3.2 Data sharing**
**Access procedures:** Public datasets are available from
_https://www.innovationplace.eu/_ .
**Document format and availability** : The datasets will be available as PDF/A
at https://www.innovationplace.eu/. From here the fully anonymized data are
accessible to anyone within the project, free of charge.
The dataset will be as open as possible and as closed as necessary, according
to the EC regulations. To this end, the Zenodo open data repository for the
STOP-IT project will be developed. Zenodo is funded by the EC (via the
OpenAIRE projects), CERN and donations, hosted by CERN, and embedded in its IT
department. This repository allows for the deposition of all kinds of digital
content: publications, data, software, multimedia etc. All published data will
be provided with a Digital Object Identifier (DOI): once published, datasets
remain fixed over their lifetime, while the metadata can change.
The data will be uploaded to zenodo.org in M36 of STOP-IT's project period.
Before uploading datasets, the data must first be anonymized. We anonymize
data after each interview; nevertheless, we will verify once more in the final
month of the project that all data are fully anonymized.
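The snippet below sketches the kind of pre-upload step this implies: direct identifiers in a tabular export are replaced with salted hashes. Strictly speaking this is pseudonymization, only one part of the full anonymization procedure (quasi-identifiers and free text still need separate treatment), and the file and column names are hypothetical.

```python
import csv
import hashlib

SECRET_SALT = b"<project-secret>"        # placeholder, kept off the repository
DIRECT_IDENTIFIERS = {"name", "email"}   # hypothetical column names

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted SHA-256 digest."""
    return hashlib.sha256(SECRET_SALT + value.encode("utf-8")).hexdigest()[:16]

with open("interviews.csv", newline="") as src, \
     open("interviews_anonymized.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        for col in DIRECT_IDENTIFIERS & set(row):
            row[col] = pseudonymize(row[col])
        writer.writerow(row)
```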
**3.3 Data Interoperability**
The final architecture and the way the STOP-IT platform will be connected with
the various technological components of the project, as well as the legacy
systems was not finalized at the time of writing this deliverable. Different
options will be investigated in order to ensure interoperability and at the
same time to serve the objectives of the project.
In general, the STOP-IT approach (both in terms of internal design and in
terms of output and data exchanged with other systems) amounts to the use of a
few well-established interfaces where feasible; limiting the complexity
generated by multiple types of interfaces (and gateways) is considered by the
consortium a key point in achieving data interoperability.
Taking into account the early stage of the project and the fact that several
implementation choices are subject to update, we will discuss in more detail
how STOP-IT seeks to provide a data-interoperable solution in later versions
of this deliverable.
**3.4 Archiving and preservation (including storage and backup)**
Archiving of the anonymized dataset at zenodo.org guarantees long-term and
secure preservation of the data at no additional cost to the project. The
STOP-IT repository can be reached at
https://zenodo.org/communities/stop-it-project. Every partner will be
responsible for uploading the open data that they own. Open data will be
accessible to any and all third parties through Zenodo.
**4 Open access to publications**
Any publications from STOP-IT must be available as open access (as far as
practicable). Open access to publications can be ensured by publishing either
in gold open access journals or in green open access journals. As part of the
dissemination and publication plan in WP9, STOP-IT-relevant journals will be
reviewed, and an initial list of relevant journals will be presented in D9.3.
Gold open access means that the article is made openly available by the
scientific publisher. Some journals charge an author processing fee for
publishing open access.
Green open access, or self-archiving, means that the published article or the
final peer-reviewed manuscript is archived by the researcher in an online
repository (e.g., on InnovationPlace), in most cases after its publication.
Most journals within the social sciences require authors to delay
self-archiving in repositories until 12 months after the article is first
published.
In the STOP-IT project, author publishing fees for gold open access journals
can be reimbursed within the project period and budget. There is, however, a
very good selection of relevant gold and green open access journals that do
not charge author processing fees. Scholarly publication can take a very long
time, and final acceptance of all submitted manuscripts may not occur before
the end of the STOP-IT project. For these reasons, we will prioritize
submitting our work to gold open access journals without author processing
fees, or to green open access journals.
Project members will maintain an updated list of relevant journals in the
project internal collaboration tool.
**5 Code of Conduct**
Finally, and most importantly, there is the need to define which uses of the
data pool built from the project are allowed. Data usage can be managed by
defining specific purposes, establishing ethical guidelines or generating
codes of conduct. Most existing codes of conduct that have emerged in the area
of data analytics tackle similar topics:
* Don't break the law.
* Be professional. Make sure you are up to date with all analytical techniques to avoid using the wrong or suboptimal style of analytics.
* Work scientifically, and don't be goal directed (working toward a certain desired result). Be prepared to reject every hypothesis.
* Respect the points of view of other professionals. Seek review from others to improve analytical results.
* As a hygiene factor, take care of security.
The STOP-IT project will implement a specific Code of Conduct particularised
to the use cases and data sources to be integrated, but that will be initially
based on the Data Science Code of Professional Conduct established by the Data
Science Association [5].
An explicit commitment of all data users will be to notify any possible re-
identification of a specific data subject based on the combination of the data
provided by different operators.
**6 Data Exchange with third countries**
The ethical standards and guidelines of Horizon 2020 will be rigorously
applied, regardless of the country in which the research is carried out.
The STOP-IT partners that are not EU members are located in Norway and Israel.
According to Commission decisions, personal data can flow to and from Norway
without any safeguards being necessary [6]. The Commission has also recognized
**Israel** as providing adequate protection.
Furthermore, no raw data will be exported from the country in which it is
collected, also due to the internal security regulations at the participating
water network operators. Only suitably anonymized and aggregated data will be
exported, in accordance with the rules laid out by the STOP-IT security board.
SINTEF will take the role as data controller ensuring that:
* Personal Data will be processed legally and fairly;
* Data will be collected for explicit and legitimate purposes and used accordingly;
* Data will be adequate, relevant and not excessive in relation to the purposes for which it is collected and processed;
* Data will be accurate, and updated where necessary;
* Data subjects can rectify, remove or block incorrect data about themselves;
* Data that identifies individuals (personal data) will not be kept any longer than necessary;
* We will protect personal data against accidental or unlawful destruction, loss, alteration and disclosure, particularly when processing involves data transmission over networks. Appropriate security measures will be implemented.
Protection measures will ensure a level of protection appropriate to the data.
The H2020 programme has made available guidelines to help participants
complete the ethics Self-Assessment in their proposals [7]. According to that
document (see pages 19 and 20), data transfer within EU/EEA countries is not
subject to specific requirements (i.e., specific authorizations or other
restrictions) but only needs to comply with the general requirements of
Directive 95/46/EC. In addition, for data transfer to third countries on the
“Commission list of countries offering adequate protection”, no additional
requirements are imposed, and such transfers from the EU therefore also fall
under the referred EU Directive.
Among the non-EU participants, only partners from Norway (an EEA member) and
Israel (an Associated Country) participate in STOP-IT. Accordingly, the
Directive allows personal data to flow to Norway (as an EEA member country)
and to Israel (which is on the “Commission list of countries offering adequate
protection”) without any further safeguards being necessary. Therefore, the
consortium does not need to take any further action for transferring data to
Norway and Israel beyond the application of Directive 95/46/EC.
**7 Confidential deliverables**
In addition to the deliverables classified as RESTREINT EU /EU RESTRICTED, the
following deliverables are deemed to be Confidential, and will only be
disseminated to the members of the consortium (including the Commission
Services).
# Table 2: Project Confidential Deliverables
<table>
<tr>
<th>
**Deliverable**
**Number**
</th>
<th>
**Deliverable**
**Title**
</th>
<th>
**WP**
**number**
</th>
<th>
**Lead beneficiary**
</th>
<th>
**Type**
</th>
<th>
**Dissemination level**
</th>
<th>
**Due**
**Date (in months)**
</th> </tr>
<tr>
<td>
D3.1
</td>
<td>
Establishing the context
for risk
assessment in FR water utility
</td>
<td>
WP3
</td>
<td>
3 - CET
</td>
<td>
Report
</td>
<td>
Confidential, only for
members
of the consortium (including the
Commission
Services)
</td>
<td>
9
</td> </tr>
<tr>
<td>
D3.3
</td>
<td>
Definition of end-user requirements in FR
water utility.
</td>
<td>
WP3
</td>
<td>
3 - CET
</td>
<td>
Report
</td>
<td>
Confidential, only for
members
of the consortium (including the
Commission
Services)
</td>
<td>
12
</td> </tr>
<tr>
<td>
D7.3
</td>
<td>
Integration
report per demo
site
</td>
<td>
WP7
</td>
<td>
12 - ICCS
</td>
<td>
Report
</td>
<td>
Confidential, only for
members of the consortium (including the
Commission
Services)
</td>
<td>
32
</td> </tr>
<tr>
<td>
D9.4
</td>
<td>
Final
Exploitation and business plan
</td>
<td>
WP9
</td>
<td>
16 - PNO
</td>
<td>
Report
</td>
<td>
Confidential, only for
members of the consortium (including the
Commission
Services)
</td>
<td>
48
</td> </tr>
<tr>
<td>
D10.1
</td>
<td>
Data protection ethics approval (H – Requirement No. 2 )
</td>
<td>
WP10
</td>
<td>
1 - SINTEF
</td>
<td>
Ethics
</td>
<td>
Confidential, only for
members of the consortium (including the
Commission
Services)
</td>
<td>
30
</td> </tr>
<tr>
<td>
D10.2
</td>
<td>
Procedures for data collection, storage, protection, retention and destruction
(POPD –
Requirement
No. 3 )
</td>
<td>
WP10
</td>
<td>
1 - SINTEF
</td>
<td>
Ethics
</td>
<td>
Confidential, only for
members of the consortium (including the
Commission
Services)
</td>
<td>
6
</td> </tr>
<tr>
<td>
D10.3
</td>
<td>
Considerations on personal data in the big data approach
(POPD –
Requirement
No. 4)
</td>
<td>
WP10
</td>
<td>
1 – SINTEF
</td>
<td>
Ethics
</td>
<td>
Confidential, only for
members of the consortium (including the
Commission
Services)
</td>
<td>
6
</td> </tr>
<tr>
<td>
D10.4
</td>
<td>
Authorisations for use of data not publicly
available
(POPD – Requirement
No. 6 )
</td>
<td>
WP10
</td>
<td>
1 - SINTEF
</td>
<td>
Ethics
</td>
<td>
Confidential, only for
members of the consortium (including the
Commission
Services)
</td>
<td>
6
</td> </tr>
<tr>
<td>
D10.5
</td>
<td>
Report from ethics advisor
(GEN – Requirement
No. 10)
</td>
<td>
WP10
</td>
<td>
1 - SINTEF
</td>
<td>
Ethics
</td>
<td>
Confidential, only for
members of the consortium (including the
Commission
Services)
</td>
<td>
12
</td> </tr> </table>
**8 Classified data**
The STOP-IT project does not envision using classified data as input, but a
number of
STOP-IT deliverables have been flagged as classified (RESTREINT UE / EU
RESTRICTED) for reasons of aggregation. Any classified deliverables will be
handled in accordance with EU rules.
The evaluation process concluded that the following STOP-IT deliverables shall
be classified RESTREINT UE / EU RESTRICTED:
# Table 3: Provisional EU RESTRICTED deliverables
<table>
<tr>
<th>
D2.3 – Best Practice guidelines for CoPs in Water Infrastructure Protection
(M48)
</th> </tr>
<tr>
<td>
D4.1 – Asset vulnerability assessment Tool (M12)
</td> </tr>
<tr>
<td>
D5.1 – Secure Wireless Sensor Communications Module (M21)
</td> </tr>
<tr>
<td>
D5.2 – IT and SCADA Security technologies (M18)
</td> </tr>
<tr>
<td>
D5.3 – Physical threats protection technologies (M18)
</td> </tr>
<tr>
<td>
D5.7 – Real-Time anomaly detection system (M24)
</td> </tr>
<tr>
<td>
D6.1 – STOP-IT architecture and STOP-IT framework design (M11)
</td> </tr>
<tr>
<td>
D6.5 – Integrated framework- version 1 (M29)
</td> </tr>
<tr>
<td>
D6.6 – Integrated framework - version 2 (M36)
</td> </tr>
<tr>
<td>
D7.1 – Pilot Plans and Report for the demo preparations (M18)
</td> </tr>
<tr>
<td>
D7.4 – Risk Management plan (draft M36) (M48)
</td> </tr>
<tr>
<td>
D7.5 – Lessons Learned, societal impact and STOP-IT adoption Roadmap (M48)
</td> </tr>
<tr>
<td>
D9.5 – STOP-IT adaptation roadmap (M48)
</td> </tr> </table>
The consortium does not agree with this classification for all deliverables,
so a process for declassification of selected deliverables will be initiated
by M12. In the meantime, all deliverables listed in Table 3 must be treated as
outlined below.
**8.1 Production of classified deliverables**
All classified deliverables must be treated as RESTREINT UE / EU RESTRICTED,
also as work-in-progress (see section 8.3). Please refer to the "Guidelines
for the classification of information in research projects" [9], and note the
following:
* Do NOT prioritise any threats described in any deliverable
* Do NOT perform any criticality analyses, vulnerability modelling of supply systems or provide any information on vulnerability assessment methodologies without anonymizing the water operators involved
* Do NOT provide any information on the performance of systems installed in water infrastructures without anonymizing the water operators involved
* Do NOT perform any in-depth quantitative analyses of the potential or actual consequences of attacks against water infrastructures without anonymizing the water operators involved (if in doubt consult the SAB regarding acceptable depth).
**8.2 Quality Assurance**
The Quality Assurance process as outlined in the Quality Assurance Plan (D1.2)
will also apply to classified deliverables, but only partners with a specified
need to know (as outlined in the Grant Agreement) will be involved in the QA
process. Partners involved with the QA process for classified deliverables
will need to have access to a computer approved for RESTREINT UE / EU
RESTRICTED (see below).
**8.3 Handling of classified deliverables**
All classified deliverables must be treated according to RESTREINT UE / EU
RESTRICTED handling rules, also when they are works-in-progress.
Since STOP-IT partners do not have access to classified computer networks,
each partner involved with production of classified deliverables will need to
procure (at least) one computer approved for RESTREINT UE / EU RESTRICTED, as
described in national guidelines for securing stand-alone computer systems
[8].
**8.3.1 Marking and storage of paper copies and equipment**
**Figure 2: Marking of classified documents**
All equipment used to create, edit or view information classified as RESTREINT
UE / EU RESTRICTED shall be stored in the same manner as documents classified
as RESTREINT UE / EU RESTRICTED.
Documents classified as RESTREINT UE / EU RESTRICTED shall be clearly marked
top and bottom, as shown in Figure 2. When not in use by authorized personnel,
documents classified as RESTREINT UE / EU RESTRICTED shall be locked down in a
filing cabinet or similar that satisfies national rules for storing documents
of equivalent classification of
RESTREINT UE / EU RESTRICTED.
**8.3.2 Sending classified deliverables**
Hardcopies of classified deliverables must be placed in an addressed envelope
sealed with security tape, with classification markings top and bottom as
illustrated in Figure 3 and Figure 4, and this envelope should be placed in an
opaque, unmarked envelope, and sent to the addressee using registered mail
(Figure 5).
**Figure 3: Envelope sealed with security tape**
**Figure 4: Sealed and marked inner envelope placed in outer envelope**
**Figure 5: Outer envelope with address but no classification markings**
**8.3.3 Requirements for computer equipment**
Recommendations for stand-alone computer equipment include at least the
following (national authorities may have additional requirements):
* Not connected to network
* No wireless connections
* No built-in camera or microphone
* Full disk encryption
* No network printers
* 64-bit processor
* BIOS password
* The equipment must be labelled "RESTREINT UE / EU RESTRICTED"
No private PCs may be used for handling RESTREINT UE / EU RESTRICTED
information.
Stand-alone PCs should run Windows 7 Professional or better.
**8.3.4 Encryption software**
Information classified RESTREINT UE / EU RESTRICTED may be encrypted with an
approved encryption system, and the encrypted data may then be sent over an
unsecure network. A list of approved software solutions is available from the
EU [10]. STOP-IT has chosen the off-line encryption tool Zed! [11] for use
within the project. It is relatively inexpensive, has an EAL3+ evaluation, and
offers a free read-only version, implying that only partners that need to send
RESTREINT UE / EU RESTRICTED information have to purchase the professional
version.
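For illustration of the general off-line encrypt-then-transfer workflow only, a conceptual sketch is shown below. It is emphatically not a substitute for the approved Zed! tool: actual RESTREINT UE / EU RESTRICTED material must only be encrypted with products on the approved list.

```python
# Conceptual illustration of off-line file encryption before transfer over an
# unsecure network. NOT a substitute for the approved Zed! tool.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # would be exchanged out-of-band
nonce = os.urandom(12)                      # must be unique per encryption

with open("deliverable_draft.docx", "rb") as fh:   # hypothetical file
    plaintext = fh.read()

# Authenticated encryption; the label is bound to the ciphertext as
# associated data so tampering with either is detected on decryption.
ciphertext = AESGCM(key).encrypt(nonce, plaintext,
                                 b"RESTREINT UE / EU RESTRICTED")

with open("deliverable_draft.docx.enc", "wb") as fh:
    fh.write(nonce + ciphertext)            # recipient splits the nonce off
```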
**8.3.5 Responsibility**
Adherence to these rules is the responsibility of:
* The work package leader (of the relevant WPs producing or reading classified deliverables)
* The partner responsible for each classified deliverable
* The Project Steering Board (PSB) member for the partners involved in producing or reading each classified deliverable
It is thus up to the above-mentioned roles to ensure that each individual
involved follows the rules.
**8.4 Access to classified data**
# Table 4: Access to classified deliverables
<table>
<tr>
<th>
Production of classified Foreground information
</th> </tr>
<tr>
<td>
Subject
</td>
<td>
Classification Level
</td>
<td>
Beneficiary involved in the production or wanting to access
</td> </tr>
<tr>
<td>
Name
</td>
<td>
Responsibility
</td>
<td>
Date of production
</td>
<td>
Comments including purpose of the access and planned use
</td> </tr>
<tr>
<td>
D2.3 - Best Practice guidelines for
CoPs in
Water
Infrastructure
Protection
</td>
<td>
RESTREINT UE / EU RESTRICTED
</td>
<td>
KWR
IWW
SINTEF
TECHN PNO
CET
EUT
RISA
ICCS
AB
VAV
BWB
MEK
</td>
<td>
Security
Manager/Main contributor Contributor
Contributor
Contributor
Contributor
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
</td>
<td>
M48
</td>
<td>
</td> </tr>
<tr>
<td>
D4.1 - Asset vulnerability assessment
tool
</td>
<td>
RESTREINT UE / EU RESTRICTED
</td>
<td>
TECHN
SINTEF
IWW MEK
ICCS
KWR
CET
AB
EUT
ATOS
WS
RISA
VAV
BWB
</td>
<td>
Security
Manager/Main contributor Contributor
Contributor
Contributor
Contributor
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
</td>
<td>
M12
</td>
<td>
as input to D4.2 as input to D4.2 as input to D4.2
as input to D4.5 as input to D6.5 and D6.6 as input to D6.5 and D6.6 as input
to D6.5 and D6.6
</td> </tr>
<tr>
<td>
D5.1 - Secure
Wireless
Sensor Communicati ons Module
</td>
<td>
RESTREINT UE / EU RESTRICTED
</td>
<td>
WS
EUT
RISA
ATOS
ICCS
SINTEF
IWW
CET
KWR
AB
VAV
MEK
BWB
PNO
</td>
<td>
Security manager/main contributor Contributor
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
</td>
<td>
M21
</td>
<td>
as input to D6.5 and D6.6 as input to D6.5 and D6.6
as input to D6.5 and D6.6
</td> </tr>
<tr>
<td>
D5.2 -IT and
SCADA
Security technologies
</td>
<td>
RESTREINT UE / EU
RESTRICTED
</td>
<td>
TECHN
SINTEF
CET
EUT ATOS mnem
RISA
ICCS
WS
IWW
KWR
</td>
<td>
Security
Manager/Main contributor Contributor
Contributor
Contributor
Contributor
Contributor
Reader only
Reader only
Reader only
Reader only
Reader only
</td>
<td>
M18
</td>
<td>
as input to D6.5 and D6.6 as input to D6.5 and D6.6
as input to D6.5
</td> </tr>
<tr>
<td>
D5.3 - Physical threats protection technologies
</td>
<td>
RESTREINT UE / EU
RESTRICTED
</td>
<td>
ICCS
CET
ATOS
MEK
AB
Apl
EUT
RISA
WS
SINTEF
IWW
KWR
</td>
<td>
Security
Manager/Main contributor Contributor
Contributor
Contributor
Contributor
Contributor
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
</td>
<td>
M18
</td>
<td>
as input to D6.5 and D6.6 as input to develop D6.5 and D6.6 as input to
develop D6.5
</td> </tr>
<tr>
<td>
D5.7 - Real-Time anomaly detection system
</td>
<td>
RESTREINT UE / EU RESTRICTED
</td>
<td>
EUT
CET
TECHN
ATOS
AB
ICCS WS mnem
RISA
SINTEF
IWW
KWR
VAV
BWB
MEK PNO
</td>
<td>
Security
Manager/Main contributor Contributor
Contributor
Contributor
Contributor
Contributor
Contributor
Contributor
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
</td>
<td>
M24
</td>
<td>
as input to develop D6.5 and D6.6
</td> </tr>
<tr>
<td>
D6.1- STOP-
IT architecture and STOP-IT framework design
</td>
<td>
RESTREINT UE / EU RESTRICTED
</td>
<td>
ATOS
CET
EUT
ICCS
RISA
WS
SINTEF
IWW
KWR
AB
VAV
BWB
MEK
</td>
<td>
Security
Manager/Main contributor Contributor
Contributor
Contributor
Contributor
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
</td>
<td>
M11
</td>
<td>
as input to develop D6.5
</td> </tr>
<tr>
<td>
D6.5 - Integrated framework- version 1
</td>
<td>
RESTREINT UE / EU RESTRICTED
</td>
<td>
RISA
EUT
ATOS
ICCS
WS
mnem
Apl
CET
SINTEF
IWW
KWR
AB
VAV
MEK
BWB
</td>
<td>
Security
Manager/Main contributor Contributor
Contributor
Contributor
Contributor
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
</td>
<td>
M29
</td>
<td>
as input to develop D6.6
to be used in WP7 to be used in WP7 to be used in WP7 to be used in WP7
</td> </tr>
<tr>
<td>
D6.6 - Integrated framework- version 2
</td>
<td>
RESTREINT UE / EU RESTRICTED
</td>
<td>
EUT
CET
ATOS
ICCS
RISA
WS mnem
Apl
PNO
SINTEF
IWW
KWR
AB
VAV
MEK
BWB
</td>
<td>
Security
Manager/Main contributor Contributor
Contributor
Contributor
Contributor
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
</td>
<td>
M36
</td>
<td>
to be used in WP7 to be used in WP7 to be used in WP7 to be used in WP7
</td> </tr>
<tr>
<td>
D7.1 - Pilot
Plans and Report for the demo
preparations
</td>
<td>
RESTREINT UE / EU RESTRICTED
</td>
<td>
ICCS
SINTEF
IWW
CET
EUT
TECHN
ATOS
MEK
AB
VAV
BWB
RISA
KWR
Apl
WS mnem
</td>
<td>
Security
Manager/Main contributor Contributor
Contributor
Contributor
Contributor
Contributor
Contributor
Contributor
Contributor
Contributor
Contributor
Contributor
Reader only
Reader only
Reader only
Reader only
</td>
<td>
M18
</td>
<td>
as input to T7.2 as input to T7.3 as input to T7.3 as input to T7.4
</td> </tr>
<tr>
<td>
D7.4 - Risk Management
plan (draft
M36)
</td>
<td>
RESTREINT UE / EU RESTRICTED
</td>
<td>
CET
VAV
BWB
MEK
AB
SINTEF
IWW
EUT
TECHN
ATOS
ICCS
Apl
WS RISA mnem
KWR
</td>
<td>
Security
Manager/Main contributor Contributor
Contributor
Contributor
Contributor
Contributor
Contributor
Contributor
Contributor
Contributor
Contributor
Contributor
Contributor
Contributor
Contributor
Reader only
</td>
<td>
M48
</td>
<td>
</td> </tr>
<tr>
<td>
D7.5 - Lessons Learned, societal impact and STOP-IT adoption roadmap
</td>
<td>
RESTREINT UE / EU
RESTRICTED
</td>
<td>
CET
SINTEF
IWW MEK
AB
VAV
ICCS
BWB
RISA
KWR
EUT
PNO
</td>
<td>
Security
Manager/Main contributor Contributor
Contributor
Contributor
Contributor
Contributor
Contributor
Contributor
Contributor
Reader only
Reader only
Reader only
</td>
<td>
M48
</td>
<td>
</td> </tr>
<tr>
<td>
D9.5 - STOP-IT adaptation roadmap
</td>
<td>
RESTREINT UE / EU RESTRICTED
</td>
<td>
WssTP
PNO BK
EMA
HW
DeW
TECHN
ATOS
Apl
WS mnem SINTEF
IWW
CET
KWR EUT
RISA
ICCS
VAV
BWB
AB
MEK
</td>
<td>
Security
Manager/Main contributor Contributor
Contributor
Contributor
Contributor
Contributor
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
Reader only
</td>
<td>
M48
</td>
<td>
</td> </tr> </table>
**9 Conclusions**
Since the DMP is expected to mature during the project, more developed
versions of the plan will be produced at later stages (in M12, M18, M24, M36
and M48). In this initial DMP we have described the requirements imposed on
STOP-IT with regard to access to research data and open access to
publications. The project partners have decided to use a combination of
collaborative web-site and zenodo.org as the open project and publication
repository, and to link the repository to a STOP-IT project site.
Chapter 3, which describes the data sets, is the most important part of the
DMP. These descriptions will likely need to be revised to provide updated
versions as the STOP-IT project evolves. We believe this is required as the
DMP should be a living document. Although we have attempted to take into
consideration the data management life cycle for the data sets to be collected
and processed by STOP-IT, it is very likely that additions and changes will be
needed.
This DMP acknowledges the importance of STOP-IT-relevant open access journals,
with an emphasis on gold open access journals. Project members will maintain
an updated list of relevant journals in the project's internal collaboration
system. For each planned publication we will consider which journals will be
the most appropriate first choices for publication.
**Executive Summary**
This document presents the 1st version of the project's Data Management Plan
(DMP). It regards the definition of the DMP as of M6; the DMP will be updated
during the project's lifetime and its final version submitted in M36.
The DMP provides an analysis of the various datasets that will be produced by
the project and the main elements of the data management policy that the
beneficiaries will apply to all datasets generated by the project. This
version of the document reflects the current state of the datasets, paving the
way for further updates during the lifecycle of the project.
In particular, this document presents the methods and conventions, as well as
recommendations, for the use, manipulation and inclusion of data sets in the
FORTIKA project. Moreover, it covers regulatory aspects and operational
information related to contacts, personnel profile details and the ownership
of data within the project. Finally, it serves as a guide for the participants
of the EU-funded FORTIKA project on the data lifecycle with respect to the
creation, identification, capture and description, storage, preservation
(including security and privacy), accessibility, discovery and analysis,
re-use and transformation of data in the context of the different deployment
sites.
Last but not least, the sections related to intellectual property rights (IPR)
are included with the objective of identifying the significant aspects of the
project. The contents of the DMP are fully in accordance with the signed Grant
Agreement of the FORTIKA project and the EC Horizon 2020 recommendations, and
the information provided will define common ground in relation to the
management of the data. In this respect, it is expected to motivate
participation and collaboration within the FORTIKA consortium and with
external partners or participants in the project.
# Introduction
## Project overview
FORTIKA project aims to minimize the exposure of small and medium sized
enterprises (SMEs) to cyber security risks and threats and help them respond
to cyber security incidents. The project aims to relieve SMEs from unnecessary
and costly efforts of identifying, acquiring and using the appropriate cyber
security solutions. In order to achieve these goals, FORTIKA aims to provide
SMEs with a smart and robust hardware security layer (i.e. an FPGA accelerator
able to process data traffic) enhanced with an adaptive security service
management ecosystem (marketplace providing appropriate cyber security
solutions).
## Data management plan purpose
A DMP describes the data management life cycle for the data to be collected,
processed and/or generated by the FORTIKA Horizon 2020 project. Data
Management Plans (DMPs) are a key element of good data manipulation and
exchange.
More specifically, the DMP is generated based on the EU Commission guidelines
regarding data management requirements for projects funded under H2020.
According to these guidelines, the data that are going to be shared for
scientific, experimental and commercial purposes should be easily
discoverable, accessible, assessable and intelligible.
Thus, the purpose of this deliverable (D1.4 - Data Management Plan) is to
provide an analysis of the main elements of the data management policy that
will be used by the consortium with regard to all the datasets that will be
generated and/or collected by the project consortium.
## EU Commission guidelines for data management
Basic guidelines for data management plans are presented in Table 1. These
guidelines were published by the EU Commission for appropriate data management
plans in Horizon 2020 projects.
<table>
<tr>
<th>
**DMP Component**
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data generated/collected
* Specify if existing data is being re-used (if any)
* Specify the origin of the data
* State the expected size of the data (if known)
* Outline the data utility: to whom will it be useful
</td> </tr>
<tr>
<td>
FAIR Data. Making data findable, including provisions for metadata.
</td>
<td>
* Outline the discoverability of data (metadata provision)
* Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
* Outline naming conventions used
* Outline the approach towards search keyword
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
* Outline the approach for clear versioning
* Specify standards for metadata creation (if any). If there are no standards in your discipline describe what type of metadata will be created and how
</th> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
* Specify which data will be made openly available? If some data is kept closed provide rationale for doing so
* Specify how the data will be made available
* Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
* Specify where the data and associated metadata, documentation and code are deposited – Specify how access will be provided in case there are any restrictions
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
* Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
</td> </tr>
<tr>
<td>
Increase data reuse (through clarifying licences)
</td>
<td>
* Specify how the data will be licenced to permit the widest reuse possible
* Specify when the data will be made available for reuse. If applicable, specify why and for what period a data embargo is needed
* Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
* Describe data quality assurance processes
* Specify the length of time for which the data will remain re-usable
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
* Estimate the costs for making your data FAIR. Describe how you intend to cover these costs
* Clearly identify responsibilities for data management in your project
* Describe costs and potential value of long term preservation
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
\- Address data recovery as well as secure storage and transfer of sensitive
data
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
– To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former
</td> </tr>
<tr>
<td>
Other
</td>
<td>
– Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
</td> </tr> </table>
## Approaches for personal data management
The EU guidelines referenced above state that the focus is not only on
personal data but that privacy concerns are also taken into account. However,
the only mention of personal data is limited to a question from an ethical
perspective, as follows: “Is informed consent for data sharing and long-term
preservation included in questionnaires dealing with personal data?” In
addition, under the rubric “Data Security”, the above table requires that the
transfer of sensitive data be addressed in data management plans.
In FORTIKA it is mandatory to have a clear strategy with regard to personal
data and privacy protection. Thus, the project has investigated and studied
the data regulatory aspects of current European frameworks and follows closely
the new European General Data Protection Regulation (GDPR), which will enter
into force on 25 May 2018. The data management plan of FORTIKA is designed
with these upcoming changes to European data protection law in mind.
## Data management roadmap
The DMP is a living document that will constantly evolve during the lifecycle
of the project. The overall timeline defining the upcoming updates of the DMP
is drawn in Figure 1.
**Figure 1: Time plan of the DMP evolution throughout the project’s lifetime**
The current document forms only the 1st version of the DMP and overviews the
current status of the datasets and the specific conditions attached to them.
An updated version of the DMP will be prepared by M18 and reported as an extra
section in D1.9 – Periodic Management Report. The final version of the DMP
will then be submitted in M36 as a stand-alone deliverable, i.e. D1.6 – Data
Management Plan (final version).
# General Principles
This DMP consists of eight basic elements, namely: 1) _data
creation/collection/development methods_, 2) _metadata and standards_, 3)
_legal, rights and ethical issues_, 4) _data storage_, 5) _data preservation_,
6) _data access_, 7) _data discovery and analysis_, and 8) _data re-use and
transformation_. In the remainder of this section, the general properties of
these elements are discussed as applied to all members of the project's
consortium.
## Data creation/collection/development methods
The project involves carrying out data collection (in the context of the
piloting and validation phase) and a set of large-scale validation tests to
assess the technology and the effectiveness of the proposed framework in
real-life conditions. For this reason, human participants will be involved in
certain aspects of the project, and data will be collected concerning their
personal information and the corresponding corporate intellectual property
rights (IPR). Because the project will collect personal data, the consortium
will comply with all European and national legislation and directives relevant
to the countries where the data collections take place. To this end, personal
data will be centrally stored. Also, data will be scrambled where possible and
abstracted in a way that will not affect the final project outcome.
## Metadata & standards
The standards used in the project are strongly related to the specific
dataset, so they are described in more detail in Section 3.
In general, all datasets are stored in well-known data formats; no new format
is used. Moreover, regarding metadata, references to adopted standards are
made. However, for a few datasets the definition of metadata is an ongoing
task, which will be finalised in the following months. Hence, the next version
of this deliverable will make full reference to the adopted standards.
## Legal, rights and ethical issues
Data management is necessarily connected to legal rights. A number of
perspectives may be applied in this regard. Data itself may be personal data
or other data. Personal data is further distinguished into sensitive data and
common data. All data may also have an economic value – notwithstanding
ownership of personal data theories and considerations.
Once the legal status of “data” under the FORTIKA project is established,
perspectives may again differ depending on the point of view. From the point
of view of rightsholders, “data management” means transfer of their rights, in
one way or another, to third parties in view of Project execution. From the
point of view of recipients of the data, “data management” means
implementation of an agreement (licensing or transfer) for the Project’s
purposes.
Even once data have been transmitted, again management issues arise, this time
of an internal nature within the organization of each Project partner.
Regardless of whether an agreement to this end has been entered within the
Consortium on a bilateral basis or not, Project partners need to apply certain
internal policies on the data they manage and process. The same is the case
even for data creators: although no bilateral agreement may yet be applicable
to them, Project requirements or general legal obligations may necessitate the
application of an internal data management policy.
Finally, the ethical perspective on data management ought not be overlooked.
Cases may arise when an otherwise lawful data management operation may not be
advisable to Project partners because of ethical reasons. Careful balancing
therefore needs to take place during the Project execution, so as for data
processing carried out during its term to comply both with applicable legal
and ethical standards.
In view of the above this analysis will highlight the many different
perspectives of data management under the FORTIKA project, in an attempt to
provide concrete guidance to Project partners.
### The legal status of data
A Data Management Plan, at least from a legal point of view, first and
foremost needs to elaborate on the legal status of its basic component term,
data. Data under the FORTIKA project may be both personal data and other
(unqualifiable from a legal perspective) data.
With regard to personal data, EU data protection law applies. According to the
EU General Data Protection Regulation (Regulation (EU) 2016/679), “’personal
data’ means any information relating to an identified or identifiable natural
person (‘data subject’); an identifiable natural person is one who can be
identified, directly or indirectly, in particular by reference to an
identifier such as a name, an identification number, location data, an online
identifier or to one or more factors specific to the physical, physiological,
genetic, mental, economic, cultural or social identity of that natural
person;” (Article 4(1)).
The FORTIKA project has well taken this matter into account, and has dedicated
a WP, WP2, to this end. It will be under this WP that the relevant, complex,
legal issues will be tackled. Here, for picture completeness purposes, it is
enough to note that the GDPR, which will become directly applicable in all of
the EU in May 2018, forms the legal basis for personal data processing
under the FORTIKA project. In the event of use cases carried out under the
Project, the applicable laws of the EU Member States concerned will equally be
taken into account. In this regard, all applicable laws and regulations with
regard to processing of personal data will be adhered to.
As far as other, unqualifiable, data processed under the Project are concerned
(for example, technical data) their legal status may vary considerably.
Technical data, for example, may not be protected per se, but if included in a
database they may acquire the special legal protection afforded to databases
within the EU (Database Directive, Directive 96/9/EC). As per its wording,
“’database’ shall mean a collection of independent works, data or other
materials arranged in a systematic or methodical way and individually
accessible by electronic or other means” (Article 2). While perhaps not being
able to profit from copyright protection, all data collected during Project
execution and placed within a database may make use of the above provisions.
On the other hand, if data under the Project pertain to a computer program,
these benefit from specialized protection under the EU law (Software
Directive, Directive 2009/24/EC), which is broadly placed under the
intellectual property law system. According to its wording, in Recital 7, “for
the purpose of this Directive, the term ‘computer program’ shall include
programs in any form, including those which are incorporated into hardware.
This term also includes preparatory design work leading to the development of
a computer program provided that the nature of the preparatory work is such
that a computer program can result from it at a later stage”.
Even for data processed under the Project that are apparently left outside the
scope of any of the above legal systems, competition law may be relevant, as
last resort. It is in this regard that, apart from work carried out under WP2,
Project partners are invited to communicate with the legal partner with regard
to the ad hoc legal status of their datasets, and the legal rights pertaining
thereto.
### Data ownership
Notwithstanding the issue of whether ownership rights are applicable at all on
personal data, ownership of the data under the Project resides, under normal
circumstances, with their creator. This is, after all, confirmed, in Section 8
of the Consortium Agreement (on Results). The legal or natural person
generating the data is these data’s first rightsholder. Admittedly, this
statement makes the assumption that intellectual property rights apply on
these data (see, for example, Article 2 of the Computer Programs Directive).
The same is the case with the Database Directive: data included in a database
confer property rights on the database's creator. In all other cases,
competition law will most likely award property-like protection to the Project
partner generating the data.
In the event that the issue of data ownership is not addressed through the
applicable legal framework or contractual agreements already in place, Project
partners need to provide, within their respective organizations, accordingly.
### Data transfers
Essentially being a collaboration project, FORTIKA involves a number of data
transfers among its project partners. These transfers ought to be examined
under the Data Management Plan as well, indeed from a legal perspective, in
view of their lawfulness and fairness, which will in turn contribute to
successful Project execution.
Data transfers under the FORTIKA Project need to be mapped and lawful. As far
as mapping is concerned, the relevant requirements are met within this
deliverable, in the sense that datasets are outlined herein and partners are
hereunder deciding, and agreeing, on current and future data transfers among them.
It is however imperative that such transfers are well regulated as well.
Regulation may come in the form of either explicit mention in the Consortium
Agreement (for example, in Sections 8 and 9) or bilateral agreements between
the parties concerned. Access and use rights, as well as derivative products
or intellectual property exploitation (in the form, for example, of patents)
are all issues that need to be addressed in such agreements, if not addressed
already in the relevant Sections (8 and 9) of the Consortium Agreement.
Additional caution is advised in the event of transmissions of personal data.
In this case, apart from any contractual agreements, EU data protection law
applies (indeed, in the event of contradictions with contractual terms, it
supersedes them). The FORTIKA project has applied the necessary legal
procedures in view of lawful processing of data by data controllers (notably,
notification of competent DPAs). The details of these tasks will be described
under the Project’s D2.4 deliverable. In this case the principle of
accountability holds liable for any such personal data processing the
controller: as per Article 5.2 of the GDPR, “the controller shall be
responsible for, and be able to demonstrate compliance with, paragraph 1
(‘accountability’)”.
However, this addresses legal issues raised only within the same controller’s
organisation. If transmission of such data to other project partners is
required, this needs to take place under special contractual clauses and
arrangements. It is to this end that legal guidance will be provided to
Project partners during use cases execution, in order to resolve relevant
issues that may arise. At any event, the applicable legal framework, as well
as concrete, practical guidance in this regard, will be provided to all
project partners under WP2.
### Internal data management policies
Regardless of whether project partners are the rightsholders or the recipients of
data processed under the Project they are required to apply internal data
management policies. These are best elaborated under other subchapters of this
chapter, for example on data collection, data storage, data re-use etc. Here
is enough to be noted that these policies need to comply with applicable legal
standards, relevant to the legal status of the data concerned. Project
partners are advised to make use of relevant legal principles, for example the
principles of proportionality and reciprocity, while drafting and implementing
their internal data management policies.
In the event that a general data management policy is already in effect within
a Project partner’s organisation, then such partner is required to advise the
Project’s Security Advisory Board, as per Article 6.9 of the Consortium
Agreement, in order to warrant its approval. In the event of divergence with
applicable Project practices, an ad hoc solution will be required.
Apart from formal legal requirements and internal regulations already in
effect, Project partners are also advised to observe soft law requirements as
well, while drafting their data management policies. The latter is a
particularly critical point, in the sense that soft law, although not of a
binding nature, should be considered applicable for the Project’s purposes,
because it will enhance compliance and will constitute Project best practices.
In essence, guidance issued by EU agencies, such as ENISA, or industry bodies,
for example applicable technical standards, although not of a formal legal
status, needs to be taken into account by project partners. By doing so, they
will adhere to the highest possible regulatory standards, which will in turn
warrant successful Project execution.
### Ethical standards on data management
Apart from legal requirements, ethical standards may too be applicable in data
management. The FORTIKA project has well acknowledged this fact, and will
issue a relevant guide, under WP2 (D2.4). In addition, it operates throughout
Project execution an Ethical Helpdesk, reporting regularly under the Project,
that is aimed at addressing relevant partners’ concerns. Issues such as
confidentiality, security and awareness are addressed under this thematic.
Here it is enough to note that partners are advised to apply the Ethics
Manual and Guidelines, and make use of the Ethical Helpdesk (section 1.3),
while drafting their data management policies.
In the event that ethical codes of practice or other relevant guidance is
already in place within organization, this fact needs to be announced to the
Project’s Security Advisory Board, in order for compatibility with the
project’s own guidelines to be confirmed. In the event of divergence, the
issue will need to be addressed on an ad hoc basis.
## Data storage
Data is an asset that has a value in itself; accordingly, storing the data
provides the capability to preserve that value. When data needs to be stored,
implications may arise that were absent until this point in the data lifecycle
(the most relevant being the format and how the data will be stored). In this
context, we list some characteristics of this process that will be taken into
consideration during the project's lifetime:
* Format to be used for storing data
* Duration of the storage
* Plan for reusing the data
* Intended community that uses this data
* Data protection for security and privacy reasons
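A minimal sketch of how these characteristics could be recorded uniformly per data set is given below; the attribute names and example values are illustrative assumptions rather than an agreed FORTIKA structure.

```python
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    """Illustrative record of the storage characteristics listed above."""
    dataset: str           # dataset identifier
    storage_format: str    # format used for storing the data
    retention: str         # duration of the storage
    reuse_plan: str        # plan for reusing the data
    community: str         # intended community that uses this data
    protection: str        # data protection for security and privacy reasons

# Hypothetical example entry.
policy = StoragePolicy(
    dataset="DS.Example.Dataset",
    storage_format="CSV on a Linux database server",
    retention="offloaded after approximately 1 month",
    reuse_plan="internal consortium analytics",
    community="FORTIKA consortium",
    protection="access control; anonymization before any sharing",
)
```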
## Data preservation
Similarly, the preservation of the data is of equal importance to its storage.
Preservation introduces a lasting challenge for the stored data: the secure
and intact maintenance of both the content and the format. In a wider
definition, preservation of data is the process of formulating and defining
the ways that guarantee the data will persist over time and remain useful for
the same purpose for which it was created and stored. Some general
characteristics of this process that will be taken into consideration during
the project's lifetime include:
* Migrate the data to best format
* Migrate the data to suitable medium
* Back-up stored data to preserve the information
* Create metadata and documentation for the stored data
* Archive data in a defined physical or virtual medium
* Secure mechanisms against threats to delete, alter or steal the data
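As a small sketch of how the back-up and integrity points above can be supported in practice, the snippet below computes fixity checksums for archived files, so that later copies or migrated media can be verified against the original manifest; paths are placeholders.

```python
import hashlib
from pathlib import Path

def sha256sum(path: Path, chunk: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as fh:
        while block := fh.read(chunk):
            h.update(block)
    return h.hexdigest()

archive = Path("archive")          # hypothetical archive directory
manifest = Path("archive.sha256")

# Write a manifest of digests; re-running this on a back-up copy and comparing
# manifests verifies that content and format survived migration intact.
with manifest.open("w") as out:
    for f in sorted(archive.rglob("*")):
        if f.is_file():
            out.write(f"{sha256sum(f)}  {f}\n")
```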
## Data access
Providing adequate access to the data lies within FORTIKA's conventional
obligations. The accessibility of data is in general a complex process, since
it has to facilitate data access for any demand. For instance, whenever the
demand comes from multiple parties, the complexity of the access process
increases. This is not only due to the specific activities associated with
every access request, but also because certain parts of the lifecycle have to
be repeated to generate new data. In this respect, data access will also
define ways to:
* distribute and re-distribute the data
* share the data and define its format
* apply security and privacy mechanisms to control accessibility
* establish copyright for the data
* promote the data
## Data analysis
FORTIKA focuses on providing services for data analysis. The analysis of data
has, in general, several implications following the original data
representation and the basic information provided when the data was created.
Data discovery and data analysis reside together in the FORTIKA lifecycle
simply because the demand for the information is to generate a meaningful
outcome that helps to inform, observe and visualize features of the data
through the dedicated tools (e.g., SIEM, visual analytics in the DSS, etc.).
Apart from the tools that will be developed within the project, the idea is,
among others, to offer an enhanced version of the data discovery and analysis
system Snort. Our aim is to provide a powerful network traffic analysis tool
that can be used as an intrusion detection/prevention system capable of
performing real-time traffic analysis on IP networks, built upon capabilities
like:
* detection of buffer overflows, stealth port scans, CGI attacks, SMB probes, OS fingerprinting attempts, etc.
* intrusion detection capabilities that can be defined by specifying:
* protocol used for the network traffic
* IP address which has to be observed
* port number that gives access to a system
* browser which is installed
* application that can be a point of risk on a system
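For illustration, the snippet below shows how such a detection capability is typically expressed in Snort's rule language; the addresses, message text and rule ID are placeholders and the rule is not a FORTIKA deliverable.

```
# Hypothetical rule: alert on an HTTP request for a known-vulnerable CGI
# script; $HOME_NET, the message and the SID are illustrative placeholders.
alert tcp any any -> $HOME_NET 80 (msg:"FORTIKA demo - possible CGI attack"; \
    flow:to_server,established; content:"/cgi-bin/phf"; nocase; \
    classtype:web-application-attack; sid:1000001; rev:1;)
```

A rule like this combines the elements listed above: the protocol (tcp), the observed IP addresses and port, and the application-level pattern that marks a point of risk on a system.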
## Data re-use & transformation
The re-usability of the data and the ways to transform it into meaningful
results form a significant process in the FORTIKA data management lifecycle.
In this respect, the correct collection of data will be performed through an
appropriate transformation mechanism, able to facilitate data re-usability and
to allow for data repurposing. FORTIKA aims to make the best use of this
process by offering re-usability and transformation tools to annotate,
describe and share data as follows:
* Follow up data along other lifecycles
* Start new data lifecycles – gather information about the produced data
* Find points for improving data
* Learn from the shared data
* Identify better mechanisms for sharing data
* Define post-processing beyond re-using and transforming data
# FORTIKA data management plan
In this section, the initial data to be provided and/or produced by the
project's end users and technical partners, respectively, have been identified
and presented. The idea behind this initial reporting of the data that the
project will deal with, analyse, produce and build/validate its technologies
upon is the following: first, the data released by the end users (i.e.,
MOTIVIAN, ALKE, NEME, Obrela & WATT) were reported; then, the technical
partners studied how these data should and will be processed, what kind of
metadata can be extracted and/or generated and, most importantly, what other
network-related data can be (seamlessly) collected from different layers of an
SME's network infrastructure and how these can be utilized to serve the
cybersecurity scope of the project.
Last but not least, it should be noted that, due to the current early phase of
the project, not all partners were able to identify the exact data with which
they will contribute to the project's needs and the corresponding DMP, and
many of the data reported hereafter may change in the next few months. In any
case, a more complete version of the DMP will be delivered in M18, as
described in Section 1.5.
## FORTIKA Datasets
The following subsections consist of the data management analysis for every
identified dataset.
### Motivian dataset
**Table 1: Dataset Characteristics**
<table>
<tr>
<th>
_**#** _
</th>
<th>
_**Motivian SMS Gateway** _
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
Name
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
Surname
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
Date of birth (not year)
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
Type of mobile device (Android/iPhone)
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
Mobile number
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
Number of SMSs sent per day
</td> </tr>
<tr>
<td>
**7**
</td>
<td>
Time taken for whole transmission of SMSs.
</td> </tr>
<tr>
<td>
**8**
</td>
<td>
Date and time of transmitted data
</td> </tr> </table>
<table>
<tr>
<th>
**Dataset name <DS.Motivian.SMS_Gateway>**
</th> </tr>
<tr>
<td>
**Metadata & standards **
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
</td>
<td>
_The dataset is a string that comprises the Name, Surname, date of birth and
type of mobile device for each user to whom the campaign SMS gateway server
sends a message. The message varies per campaign and is not stored. We don’t
foresee any future sub-datasets._
_Data that can also be monitored includes the number of SMSs, the transmission
period, and the date and time of transmission._
</td> </tr>
<tr>
<td>
**Data source (i.e. which device)**
</td>
<td>
_The dataset is collected online through Motivian’s SMS Gateway server and is
stored on a separate server hosting the database. The devices used are Linux
based servers in a 3-Tier architecture._
</td> </tr>
<tr>
<td>
**Information about metadata (Production and storage dates, places) and documentation**
</td>
<td>
_Aggregated data concerning the frequency of sent messages or phone-calls will
be used for visualizing dependences between mobile users. Information of
interest would be:_
* _How many messages are sent between people of similar age (e.g. 30 to 40 years old),_
* _How many messages are sent per hour (which hours of the day have more traffic),_
* _What type of device is used (e.g. Android, iPhone) and with what frequency._
_By visualizing such information, abnormalities in the frequency of sent
messages can be detected and investigated._
</td> </tr>
<tr>
<td>
**Standards, format,**
**estimated volume of data**
</td>
<td>
_The data format is specific and is provided by the mobile operator. The
structure is as follows:_
* _Name, alpha string, 20 chars_
* _Surname, alpha string, 40 chars_
* _Date, 4 digits (e.g. 30/11 for Nov 30)_
* _Device type, alphanumeric, 20 chars_
* _Mobile number, numeric, 20 digits_
_The estimated volume can vary from 10,000 SMSs per day to 1M SMSs per day._
</td> </tr>
<tr>
<td>
**Partners’ activities & responsibilities **
</td> </tr>
<tr>
<td>
**Device owner**
</td>
<td>
_Motivian_
</td> </tr>
<tr>
<td>
**Partner in charge of data collection**
</td>
<td>
_Motivian_
</td> </tr>
<tr>
<td>
**Partner in charge of data storage**
</td>
<td>
_Motivian_
</td> </tr>
<tr>
<td>
**Partner in charge of data analysis**
</td>
<td>
_Motivian_
</td> </tr>
<tr>
<td>
**Data storage & preservation **
</td> </tr>
<tr>
<td>
**Data storage, including backups**
</td>
<td>
_The data is stored on Motivian servers (database and storage). The
information is owned by Motivian’s clients with whom Motivian has signed an
agreement._
_Data is offloaded after approximately 1 month._
_There is no limitation in archiving._
_For the period of 1 month, archived data can reach 100M SMSs._
</td> </tr>
<tr>
<td>
**Data access, sharing & reuse **
</td> </tr>
<tr>
<td>
**Data exploitation**
</td>
<td>
_Collected data can be processed to run analytics on how many SMSs have been
sent per day or over the whole campaign. This data is only for internal
consortium consumption or exploitation._
</td> </tr>
<tr>
<td>
**Data access policy/ dissemination level (confidential, only for members of
the consortium and the commission**
</td>
<td>
_The full dataset will be confidential and only the members participating in
the deployment will have access to it. Furthermore, if the dataset or specific
portions of it (e.g. metadata, statistics, etc.) are decided to be made openly
accessible, they will be uploaded to the FORTIKA open data platform._
</td> </tr>
<tr>
<td>
**services, public)**
</td>
<td>
_Of course, these data will be anonymized in order to avoid any potential
ethical issues with their publication and dissemination. Care must be taken so
that the data is only seen by Motivian employees who are accredited to see it._
</td> </tr>
<tr>
<td>
**Data sharing, re-use and distribution**
</td>
<td>
_Data sharing policies have not been decided yet. Since the dataset in this
case contains personal information (name, surname) we envisage that no data
sharing will take place. The only data to be shared is number of SMSs,
transmission period, date and time of transmission._
</td> </tr>
<tr>
<td>
**Embargo periods (if any)**
</td>
<td>
_No, there is no embargo period before data is shared. An embargo applies only
to Motivian with respect to disclosing any personal information, and to the
employees of Motivian._
</td> </tr> </table>
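As an illustration of the fixed record structure described in the table above,
the following Python sketch validates one record against the stated field
types and lengths. The semicolon delimiter and the exact character classes are
assumptions for the example, since the document does not specify them.

```python
# Hedged sketch: validating one record of the Motivian SMS gateway format
# described above (fixed field types/lengths; the delimiter is assumed to be
# a semicolon, which the document does not specify).
import re

FIELD_RULES = [
    ("name",          re.compile(r"^[A-Za-z ]{1,20}$")),
    ("surname",       re.compile(r"^[A-Za-z ]{1,40}$")),
    ("date",          re.compile(r"^\d{2}/\d{2}$")),      # e.g. 30/11 (no year)
    ("device_type",   re.compile(r"^[A-Za-z0-9 ]{1,20}$")),
    ("mobile_number", re.compile(r"^\d{1,20}$")),
]

def parse_record(line: str) -> dict:
    fields = line.strip().split(";")
    if len(fields) != len(FIELD_RULES):
        raise ValueError(f"expected {len(FIELD_RULES)} fields, got {len(fields)}")
    record = {}
    for (name, rule), value in zip(FIELD_RULES, fields):
        if not rule.match(value):
            raise ValueError(f"field {name!r} malformed: {value!r}")
        record[name] = value
    return record

print(parse_record("John;Doe;30/11;Android;306912345678"))
```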
### ALKE datasets
**Table 2: Dataset List**
<table>
<tr>
<th>
_**#** _
</th>
<th>
_**ALKE dataset** _
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
Data exchange between the company intranet and the web.
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
Data exchange between the company and electric vehicles on the field.
</td> </tr> </table>
<table>
<tr>
<th>
**Dataset name <DS. ** **ALKE.intranet > **
</th> </tr>
<tr>
<td>
**Metadata & standards **
</td> </tr>
<tr>
<td>
**Dataset**
</td>
<td>
_Data exchange between the company intranet and the web._
</td> </tr>
<tr>
<td>
**Data source (i.e. which device)**
</td>
<td>
_Company intranet._
</td> </tr>
<tr>
<td>
**Information about metadata (Production and storage dates, places) and
documentation**
</td>
<td>
_Aggregated data concerning the number of sent e-mails per hour, or the number
of transported ftp files per hour, and by which users, can be used in order to
produce visualized outputs and detect abnormalities in the behaviour of
users._
</td> </tr>
<tr>
<td>
**Standards, format,**
**estimated volume of data**
</td>
<td>
_Any kind of data can be transmitted to the web from the company network
(emails, remote terminals, ftp files, etc.). There is not a fixed dataset; it
is an amount of data going in and out._
</td> </tr>
<tr>
<td>
**Partners’ activities & responsibilities **
</td> </tr>
<tr>
<td>
**Device owner**
</td>
<td>
_ALKE_
</td> </tr>
<tr>
<td>
**Partner in charge of data collection**
</td>
<td>
_ALKE_
</td> </tr>
<tr>
<td>
**Partner in charge of data storage**
</td>
<td>
_ALKE_
</td> </tr>
<tr>
<td>
**Data storage, including backups**
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
**Data access, sharing & reuse **
</td> </tr>
<tr>
<td>
**Data exploitation**
</td>
<td>
_The data will be used to deal with security problems, such as the security of
incoming/outgoing communications._
</td> </tr>
<tr>
<td>
**Data access policy/ dissemination level (confidential, only for members of
the consortium and the commission services,**
**public)**
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
**Data sharing, re-use and distribution**
</td>
<td>
_-_
</td> </tr> </table>
<table>
<tr>
<th>
**Dataset name <DS. ** **ALKE.Electric_vehicle > **
</th> </tr>
<tr>
<td>
**Metadata & standards **
</td> </tr>
<tr>
<td>
**Dataset**
</td>
<td>
_Data exchange between the company and electric vehicles on the field_
</td> </tr>
<tr>
<td>
**Data source (i.e. which device)**
</td>
<td>
_Electric vehicles_
</td> </tr>
<tr>
<td>
**Information about metadata (Production and storage dates, places) and
documentation**
</td>
<td>
_Aggregated data concerning the controller status, dashboard alerts, dashboard
light status, on board temperature, motor temperature, onboard devices status,
vehicle’s energy consumption, etc., can be used in order to detect
abnormalities in the behaviour of electric vehicles._
</td> </tr>
<tr>
<td>
**Standards, format,**
**estimated volume of data**
</td>
<td>
_The data collected from the vehicle and shared in the cloud are the
following:_
* _Vehicle ID,_
* _total Km covered by the vehicle,_
* _partial Km,_
* _drive mode,_
* _gearbox status (neutral, forward, backward),_
* _driver presence,_
* _battery cycles,_
* _energy charged,_
* _energy delivered,_
* _regenerative energy produced,_
* _total energy,_
* _battery Voltage,_
* _vehicle status,_
* _controller status,_
* _charger status,_
* _battery data logger status,_
* _GPS status,_
* _GPS position,_
* _date,_
* _time,_
* _dashboard alerts,_
</td> </tr>
<tr>
<td>
</td>
<td>
* _dashboard light status,_
* _on board temperature,_
* _motor temperature,_
* _onboard devices status_
_(other functions are under development so not yet finalized, thus this list
may increase)._
</td> </tr>
<tr>
<td>
**Partners’ activities & responsibilities **
</td> </tr>
<tr>
<td>
**Device owner**
</td>
<td>
_ALKE_
</td> </tr>
<tr>
<td>
**Partner in charge of data collection**
</td>
<td>
_ALKE_
</td> </tr>
<tr>
<td>
**Partner in charge of data storage**
</td>
<td>
_ALKE_
</td> </tr>
<tr>
<td>
**Data storage, including backups**
</td>
<td>
_Stored in the cloud._
</td> </tr>
<tr>
<td>
**Data access, sharing & reuse **
</td> </tr>
<tr>
<td>
**Data exploitation**
</td>
<td>
_The data will be used to deal with security problems. Security issues that
can affect the system relate mainly to intrusion at the level of the vehicle
body computer or into the ALKE cloud platform, resulting in stolen data or, in
the worst case, intrusion at the level of the control unit, taking control of
some activity of the vehicle rather than only obtaining data._
</td> </tr>
<tr>
<td>
**Data access policy/ dissemination level (confidential, only for members of
the consortium and the commission services,**
**public)**
</td>
<td>
_The only sensitive data that can be linked to personal data is, in some very
specific circumstances, the name of the driver, in the presence of a specific
request for fleet management purposes._
</td> </tr>
<tr>
<td>
**Data sharing, re-use and distribution**
</td>
<td>
_-_
</td> </tr> </table>
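For illustration, a single vehicle telemetry record covering a subset of the
fields listed above could be modelled as follows. This is a hypothetical
Python sketch; field names and types are illustrative, not ALKE's actual
format.

```python
# Illustrative Python sketch (assumed, not ALKE's format) of one vehicle
# telemetry record covering a subset of the fields listed above.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class VehicleTelemetry:
    vehicle_id: str
    total_km: float
    partial_km: float
    gearbox_status: str          # "neutral" | "forward" | "backward"
    driver_present: bool
    battery_voltage_v: float
    gps_position: tuple          # (latitude, longitude)
    timestamp: str

sample = VehicleTelemetry(
    vehicle_id="ALKE-ATX-001",   # hypothetical identifier
    total_km=12345.6,
    partial_km=42.1,
    gearbox_status="forward",
    driver_present=True,
    battery_voltage_v=48.2,
    gps_position=(45.40, 11.88),
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(asdict(sample))
```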
### TEC datasets
**Table 3: Dataset List**
<table>
<tr>
<th>
_**#** _
</th>
<th>
_**TEC Dataset.** _
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
_TIMIT data._
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
_Twitter data._
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
_Switchboard LDC97S62 data._
</td> </tr> </table>
<table>
<tr>
<th>
**Dataset name <DS. TEC.TIMIT> **
</th> </tr>
<tr>
<td>
**Metadata & standards **
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
_Access to TIMIT data corpus which was purchased as part of another EU
project._
</td> </tr>
<tr>
<td>
**Data source (i.e. which device)**
</td>
<td>
_TIMIT data corpus._
</td> </tr>
<tr>
<td>
**Information about metadata (Production and storage dates, places) and**
**documentation**
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
**Standards, format,**
**estimated volume of data**
</td>
<td>
_The size of the dataset we hold for research purposes is 19MB stored as plain
text file._
</td> </tr>
<tr>
<td>
**Partners’ activities & responsibilities **
</td> </tr>
<tr>
<td>
**Device owner**
</td>
<td>
_TEC_
</td> </tr>
<tr>
<td>
**Partner in charge of data collection**
</td>
<td>
_TEC_
</td> </tr>
<tr>
<td>
**Partner in charge of data storage**
</td>
<td>
_TEC_
</td> </tr>
<tr>
<td>
**Partner in charge of data analysis**
</td>
<td>
_TEC_
</td> </tr>
<tr>
<td>
**Data storage & preservation **
</td> </tr>
<tr>
<td>
**Data storage, including backups**
</td>
<td>
_The size of the dataset we hold for research purposes is 19MB stored as plain
text file._
</td> </tr>
<tr>
<td>
**Data access, sharing & reuse **
</td> </tr>
<tr>
<td>
**Data exploitation**
</td>
<td>
_Access to TIMIT data corpus which was purchased as part of another EU
project._
</td> </tr>
<tr>
<td>
**Data access policy/ dissemination level (confidential, only for members of
the consortium and the commission**
**services, public)**
</td>
<td>
_The data is open source and hence there are no confidentiality and privacy
issues._
</td> </tr>
<tr>
<td>
**Data sharing, re-use and distribution**
</td>
<td>
\-
</td> </tr> </table>
<table>
<tr>
<th>
**Dataset name <DS. TEC.Twitter> **
</th> </tr>
<tr>
<td>
**Metadata & standards **
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
_Data from Twitter users._
</td> </tr>
<tr>
<td>
**Data source (i.e. which device)**
</td>
<td>
_Twitter._
</td> </tr>
<tr>
<td>
**Information about metadata (Production and**
</td>
<td>
_Aggregated information concerning the frequency of referenced words,
frequently referenced links (e.g. malicious_
</td> </tr>
<tr>
<td>
**storage dates, places) and documentation**
</td>
<td>
_links sent by multiple users) and the location of such twits, can be used in
order to detect affected twitter accounts._
</td> </tr>
<tr>
<td>
**Standards, format,**
**estimated volume of data**
</td>
<td>
_The size of the dataset we hold for research purposes is 10MB in XML format._
</td> </tr>
<tr>
<td>
**Partners’ activities & responsibilities **
</td> </tr>
<tr>
<td>
**Device owner**
</td>
<td>
_TEC_
</td> </tr>
<tr>
<td>
**Partner in charge of data collection**
</td>
<td>
_TEC_
</td> </tr>
<tr>
<td>
**Partner in charge of data storage**
</td>
<td>
_TEC_
</td> </tr>
<tr>
<td>
**Partner in charge of data analysis**
</td>
<td>
_TEC_
</td> </tr>
<tr>
<td>
**Data storage & preservation **
</td> </tr>
<tr>
<td>
**Data storage, including backups**
</td>
<td>
_The size of the dataset we hold for research purposes is 10MB in XML format._
</td> </tr>
<tr>
<td>
**Data access, sharing & reuse **
</td> </tr>
<tr>
<td>
**Data exploitation**
</td>
<td>
_Data from Twitter users._
</td> </tr>
<tr>
<td>
**Data access policy/ dissemination level (confidential, only for members of
the consortium and the commission**
**services, public)**
</td>
<td>
_These data have sensitive attributes and hence we anonymize them with
state-of-the-art techniques. In addition, we apply homomorphic encryption to
carry out any analysis in the encrypted domain._
</td> </tr>
<tr>
<td>
**Data sharing, re-use and distribution**
</td>
<td>
\-
</td> </tr> </table>
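As a minimal illustration of analysis in the encrypted domain mentioned above,
the following sketch uses the Paillier partially homomorphic scheme via the
`phe` Python library (an assumed toolkit choice; the document does not name
one) to sum per-account message counts without decrypting the individual
values.

```python
# Minimal sketch of analysis in the encrypted domain using the Paillier
# partially homomorphic scheme (via the 'phe' library, an assumed choice):
# summing per-account message counts without decrypting individual values.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

message_counts = [3, 7, 1, 12]                       # toy per-account counts
encrypted = [public_key.encrypt(c) for c in message_counts]

# Addition of ciphertexts corresponds to addition of the plaintexts
encrypted_total = sum(encrypted[1:], encrypted[0])
print(private_key.decrypt(encrypted_total))          # -> 23
```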
<table>
<tr>
<th>
**Dataset name <DS. TEC. _Switchboard_LDC97S62_ > **
</th> </tr>
<tr>
<td>
**Metadata & standards **
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
_Access to the switchboard LDC97S62 dataset that is publicly available for
research purposes._
</td> </tr>
<tr>
<td>
**Data source (i.e. which device)**
</td>
<td>
_LDC97S62 dataset._
</td> </tr>
<tr>
<td>
**Information about metadata (Production and storage dates, places) and**
**documentation**
</td>
<td>
\-
</td> </tr>
<tr>
<td>
**Standards, format,**
**estimated volume of data**
</td>
<td>
_The size of the dataset we hold for research purposes is 20MB._
</td> </tr>
<tr>
<td>
**Partners’ activities & responsibilities **
</td> </tr>
<tr>
<td>
**Device owner**
</td>
<td>
_TEC_
</td> </tr>
<tr>
<td>
**Partner in charge of data collection**
</td>
<td>
_TEC_
</td> </tr>
<tr>
<td>
**Partner in charge of data storage**
</td>
<td>
_TEC_
</td> </tr>
<tr>
<td>
**Partner in charge of data analysis**
</td>
<td>
_TEC_
</td> </tr>
<tr>
<td>
**Data storage & preservation **
</td> </tr>
<tr>
<td>
**Data storage, including backups**
</td>
<td>
_The size of the dataset we hold for research purposes is 20MB._
</td> </tr>
<tr>
<td>
**Data access, sharing & reuse **
</td> </tr>
<tr>
<td>
**Data exploitation**
</td>
<td>
_Access to the LDC97S62 dataset that is publicly available for research
purposes._
</td> </tr>
<tr>
<td>
**Data access policy/ dissemination level (confidential, only for members of
the consortium and the commission**
**services, public)**
</td>
<td>
The sensitive data we use for the evaluation of the security components are
all protected using state-of-the-art encryption tools.
</td> </tr>
<tr>
<td>
**Data sharing, re-use and distribution**
</td>
<td>
\-
</td> </tr> </table>
### Nemetschek dataset
**Table 4: Dataset List**
<table>
<tr>
<th>
_**#** _
</th>
<th>
_**Nemetschek dataset** _
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
Data exchange between the company intranet and the web.
</td> </tr> </table>
<table>
<tr>
<th>
**Dataset name <DS. Nemetschek.intranet> **
</th> </tr>
<tr>
<td>
**Metadata and standards**
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
_Data exchange between the company intranet and the web._
</td> </tr>
<tr>
<td>
**Data source (i.e. which**
**device)**
</td>
<td>
* _Nemetschek has certified ISO 27001 compliant ISMS and ISO 9001 QMS_
* _Complex ITC infrastructure undergoing major restructuring._
◦ _More than 250 workstations (desktops, laptops) and numerous mobile/portable
devices,_
◦ _More than 15 servers (AD controllers, application and file servers)._
◦ _Fail-over internet connectivity_
▪ _2 connections to 2 independent ISP providers (the possibility of adding a third one or replacing one of the old providers is currently under investigation)_
▪ _2 enterprise Cisco routers (with BGP protocol)_
▪ _2 enterprise Cisco ASA firewalls (plus 1 legacy Cisco ASA device with specific functions)_
▪ _2 Sophos UTM devices used as VPN tunnel termination points and mail-relay/antivirus/anti-spam_
◦ _New branch office that accommodates part of the virtualization and storage infrastructure and is safely connected to the headquarters and to the Internet_
* _Cisco IDS modules on the ASA firewalls_
* _Enterprise anti-malware protection system based on Sophos software_
* _ICT monitoring infrastructure based on Zabbix – with a main focus on the availability of important Nemetschek or partner systems and on resource availability of some particularly important servers_
* _On-premise VOIP system (based on Asterisk)_
* _LAN is Windows AD 2012 R2 based (future upgrade to 2016)_
* _Trusted relationships between domains with 1 foreign partner_
* _5 VPNs with foreign development partners (4 point-to-point tunnels, 1 one-to-many VPN)_
◦ _Employees of Nemetschek and partner representatives need to access systems on both sides of the VPNs_
◦ _Some of the partners have certified ISO 27001 ISMS and all of them have strict information security requirements_
* _Office VPN is heavily used by employees at home office or on business trips_
* _2 Wi-Fi networks (company network and guest) with constantly increasing workload_
* _Complex e-mail infrastructure_
◦ _IBM Domino based: 2 main servers (clustered), one server for web/mobile access, 2 mail-relay/antispam/anti-malware Sophos UTM devices_
◦ _Moving to MS Exchange (cloud based or on premise) is currently being investigated_
* _Certificate authority infrastructure is undergoing improvement and restructuring_
* _Number of publicly accessible web sites hosted within the company infrastructure is increasing steadily_
* _Virtualization infrastructure based on VMWare and CITRIX hypervisors_
* _Implementation of application and desktop virtualization (CITRIX)_
* _Skype for Business infrastructure (needed by one of the development teams) is under investigation_
* _Video conference systems (Cisco Telepresence) are established at the headquarters and the new branch office and need to be connected with similar systems at partner sites_
* _Representatives of the support unit of the Sales Dept. have to visit customers or remotely access their IT infrastructure_
* _Most of the information systems are on premise, but future use of cloud based (IAAS, PAAS) or hybrid infrastructure is under investigation_
* _Increasing number of security-sensitive information systems which should be accessed only by authorized users (Payroll and Paid Leave systems for the whole company, systems in the Finance/HR Department)_
</td> </tr>
<tr>
<td>
**Information about metadata (Production and storage dates, places) and**
**documentation**
</td>
<td>
_Aggregated data concerning the number of sent e-mails per hour, or the number
of transported ftp files per hour, and by which users, can be used in order to
produce visualized outputs and detect abnormalities in the behaviour of
users._
</td> </tr>
<tr>
<td>
**Standards, format, estimated volume of data**
</td>
<td>
_Any kind of data can be transmitted to the web from the company network
(emails, remote terminals, ftp files, etc). There is not a fixed dataset, it
is an amount of data going in and out._
</td> </tr>
<tr>
<td>
**Partners’ activities and responsibilities**
</td> </tr>
<tr>
<td>
**Device owner**
</td>
<td>
_Nemetschek_
</td> </tr>
<tr>
<td>
**Partner in charge of data collection**
</td>
<td>
_Nemetschek_
</td> </tr>
<tr>
<td>
**Partner in charge of data storage**
</td>
<td>
_Nemetschek_
</td> </tr>
<tr>
<td>
**Data storage, including backups**
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
**Data access, sharing and reuse**
</td> </tr>
<tr>
<td>
**Data exploitation**
</td>
<td>
_FORTIKA systems/technologies/elements that could benefit from piloting:_
* _Hardware or/and virtual appliances_
* _Intrusion prevention_
* _Better analysis of traffic to security-sensitive systems within our ICT_
* _Better protection of publicly accessible systems and systems for internal use (including the e-mail system) from cyber attacks originating outside/inside the Nemetschek ICT infrastructure_
* _Monitoring traffic to/from the Guest Wi-Fi network and detecting cyber attacks from users of that network_
</td> </tr>
<tr>
<td>
**Data access policy/ dissemination level (confidential, only for members of
the consortium and the commission services, public)**
</td>
<td>
* _Installation and configuration should not cause disruption of ICT infrastructure functioning, especially connections (VPNs, domain trusts) with partners, and publicly accessible web sites_
* _Implementation and operation of FORTIKA solutions should not lead to significant changes in the configuration of the Nemetschek ICT infrastructure (changes in the routing between networks, changes in IP address space, AS for external IP addresses, IP addresses for VPN tunnel terminations, changes in IP addresses of systems which are accessed by second and third parties (partners and customers))_
* _All traffic between Nemetschek and its partners and customers should be protected from unauthorized access by organizations participating in the FORTIKA project_
* _Implementation (installation, configuration) and operation of FORTIKA systems/elements should not significantly increase the load on IT personnel (system administrators), e.g. through a steep learning curve or time spent on system monitoring_
</td> </tr>
<tr>
<td>
**Data sharing, re-use and distribution**
</td>
<td>
\-
</td> </tr>
<tr>
<td>
**Embargo periods (if any)**
</td>
<td>
_-_
</td> </tr> </table>
### SEARS Dataset
In this subsection the SEARS dataset, containing information about users and
their chat activity, is presented.
Nowadays, SME employees are using cyber chat tools to better serve customers
or to meet other work needs. The characteristics of the SEARS dataset for
social engineering recognition are given in the following matrix.
**Table 5: SEARS Dataset Characteristics**
<table>
<tr>
<th>
_**#** _
</th>
<th>
**SEARS features**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
_Username._
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
_Dialogue._
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
_Timestamp._
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
_Exposure Time._
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
_Interlocutors Common History._
</td> </tr> </table>
Information concerning SEARS dataset is given in the following matrix.
<table>
<tr>
<th>
**Dataset name <DS. ** **UOM.SEARS > **
</th> </tr>
<tr>
<td>
**Metadata and standards**
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
_The dataset comprises labelled chat text dialogues between attackers and
victims, as well as usual chat dialogues. The dataset is accompanied by UOM’s
slang language dictionary._
</td> </tr>
<tr>
<td>
**Data source (i.e. which**
**device)**
</td>
<td>
_Data will be collected during pilot use cases._
</td> </tr>
<tr>
<td>
**Information about metadata (Production and storage dates, places) and**
**documentation**
</td>
<td>
_The content of metadata includes chat messages, timestamps and categorical
data. These data can be used in order to categorize a dialogue and infer about
social engineering attempts._
</td> </tr>
<tr>
<td>
**Standards, format, estimated volume of data**
</td>
<td>
_The primary data formats of the final dataset will be CSV and JSON. Moreover,
special chat storage formats, like chat trees, will be used in combination
with MongoDB._
</td> </tr>
<tr>
<td>
**Partners’ activities and responsibilities**
</td> </tr>
<tr>
<td>
**Device owner**
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
**Partner in charge of data collection**
</td>
<td>
_UOM_
</td> </tr>
<tr>
<td>
**Partner in charge of data storage**
</td>
<td>
_UOM_
</td> </tr>
<tr>
<td>
**Data storage, including backups**
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
**Data access, sharing and reuse**
</td> </tr>
<tr>
<td>
**Data exploitation**
</td>
<td>
_The dataset will be used in SEARS module to estimate social engineering
attack risk._
</td> </tr>
<tr>
<td>
**Data access policy/ dissemination level (confidential, only for members of
the consortium and the commission services, public)**
</td>
<td>
_The full dataset will be confidential during the FORTIKA project and will be
available to FORTIKA members. Furthermore, if the dataset or specific portions
of it (e.g. metadata, statistics, etc.) are decided to be made openly
accessible, they will be uploaded to the FORTIKA open data platform. Of
course, the data will be anonymized in order to avoid any potential ethical
issues with their publication and dissemination._
</td> </tr>
<tr>
<td>
**Data sharing, re-use and**
**distribution**
</td>
<td>
_The dataset will be available to FORTIKA members upon request to the UoM
representative, Prof. Ioannis MAVRIDIS, via email. The purpose of the dataset
sharing request should be expressed clearly to UoM’s representative. After the
approval of the dataset sharing request, access to the dataset will be
provided to the requester via email._
</td> </tr>
<tr>
<td>
**Embargo periods (if any)**
</td>
<td>
_-_
</td> </tr> </table>
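To illustrate the CSV/JSON formats mentioned above, a single SEARS chat record
serialized to JSON might look as follows. The field names mirror Table 5 but
are purely illustrative; the final schema will be defined by UOM during the
pilots.

```python
# Hypothetical sketch of a single SEARS chat record serialized to JSON.
# Field names follow Table 5 but are illustrative only.
import json
from datetime import datetime, timezone

record = {
    "username": "user_042",                      # pseudonymized identifier
    "dialogue": [
        {"from": "user_042", "text": "hi, can you reset my password?"},
        {"from": "agent_07", "text": "sure, please confirm your employee id"},
    ],
    "timestamp": datetime(2018, 3, 14, 10, 15, tzinfo=timezone.utc).isoformat(),
    "exposure_time_s": 95,                       # how long the dialogue lasted
    "interlocutors_common_history": 3,           # number of previous dialogues
    "label": "benign",                           # or "social_engineering"
}

print(json.dumps(record, indent=2))
```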
### ABAC Dataset
An access request in an attribute-based access control (ABAC) system consists
of both subject (requestor) and object (resource) attributes, as well as a set
of attributes related to the context of the request, namely environmental
attributes. The aforementioned attribute sets vary between individual cases
and will be set during the implementation process, depending on each host's
requirements.
**Table 6: Dataset Characteristics**
<table>
<tr>
<th>
_**#** _
</th>
<th>
**ABAC**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
_Subject Attributes._
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
_Object attributes_
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
_Environmental Attributes._
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
_Access Decision_
</td> </tr> </table>
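To make the attribute categories of Table 6 concrete, the following is a
minimal, hypothetical Python sketch of an ABAC evaluation (not the UoM
module): a policy is a predicate over subject, object and environmental
attributes, and its outcome is the access decision the dataset would record.

```python
# Minimal ABAC evaluation sketch (illustrative, not the UoM module): a policy
# is a predicate over subject, object and environmental attributes, and the
# access decision is the record the dataset would store.
from datetime import time

def office_hours_policy(subject, obj, env):
    """Permit managers to read finance reports during office hours."""
    return (
        subject.get("role") == "manager"
        and obj.get("type") == "finance_report"
        and env.get("action") == "read"
        and time(8) <= env.get("time") <= time(18)
    )

request = {
    "subject": {"role": "manager", "department": "finance"},
    "object": {"type": "finance_report", "owner": "finance"},
    "environment": {"action": "read", "time": time(10, 30)},
}

decision = "permit" if office_hours_policy(
    request["subject"], request["object"], request["environment"]) else "deny"
print(decision)  # -> permit
```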
Information concerning the ABAC dataset is given in the following matrix.
<table>
<tr>
<th>
**Dataset name <DS.UoM.ABAC> **
</th> </tr>
<tr>
<td>
**Metadata and standards**
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
_ABAC Dataset_
</td> </tr>
<tr>
<td>
**Data source (i.e. which**
**device)**
</td>
<td>
_Data transmitted from requestor or sensed from the module itself._
</td> </tr>
<tr>
<td>
**Information about metadata (Production and storage dates, places) and**
**documentation**
</td>
<td>
_Metadata information about an attribute in an ABAC implementation can be
kept, allowing a party to obtain a greater understanding of how the attribute
and its value were obtained, determined and vetted, and enabling more
effective policy creation and application. The metadata do not concern the
individual or the entity itself but the attribute only._
</td> </tr>
<tr>
<td>
**Standards, format, estimated volume of data**
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
**Partners’ activities and responsibilities**
</td> </tr>
<tr>
<td>
**Device owner**
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
**Partner in charge of data collection**
</td>
<td>
_UoM_
</td> </tr>
<tr>
<td>
**Partner in charge of data storage**
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
**Data storage, including backups**
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
**Data access, sharing and reuse**
</td> </tr>
<tr>
<td>
**Data exploitation**
</td>
<td>
_The data will be used to evaluate the access decision for a subject over a
resource at a certain time. Records might be kept for auditing purposes._
</td> </tr>
<tr>
<td>
**Data access policy/ dissemination level (confidential, only for members of
the consortium and the commission services, public)**
</td>
<td>
_The full dataset will be confidential and only the members participating in
the deployment will have access to it. There is no intention of publishing
these data._
</td> </tr>
<tr>
<td>
**Data sharing, re-use and**
**distribution**
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
**Embargo periods (if any)**
</td>
<td>
_-_
</td> </tr> </table>
### NEXTEL Dataset
This subsection describes the Risk Assessment dataset. The following table
describes the set of files or metadata containing the following information.
**Table 7: Risk Assessment Dataset Characteristics**
<table>
<tr>
<th>
_**#** _
</th>
<th>
**Risk Assessment**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
_Assets inventory_
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
_Threats catalogue_
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
_Vulnerability reports_
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
_Security effectiveness for running security controls_
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
_User directory_
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
_Security Policy_
</td> </tr> </table>
Detailed information for the risk assessment dataset is given below.
<table>
<tr>
<th>
**Dataset name <DS.NEXTEL.RA> **
</th> </tr>
<tr>
<td>
**Metadata and standards**
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
_The dataset will support and characterize a process for risk assessment._
</td> </tr>
<tr>
<td>
**Data source (i.e. which**
**device)**
</td>
<td>
_The source of data will come from a typical characterization of SME
environments._
_Concrete data sources will be collected during the pilot._
</td> </tr>
<tr>
<td>
**Information about metadata (Production and storage dates, places) and**
**documentation**
</td>
<td>
_The content of metadata includes chat messages, timestamps and categorical
data. These data can be used in order to categorize a dialogue and infer about
social engineering attempts._
</td> </tr>
<tr>
<td>
**Standards, format, estimated volume of data**
</td>
<td>
_Different formats for data. Standard storage on relational databases._
</td> </tr>
<tr>
<td>
**Partners’ activities and responsibilities**
</td> </tr>
<tr>
<td>
**Device owner**
</td>
<td>
_The SME or final use case partner; ultimately, this depends on the FORTIKA
license typology. Device in the FORTIKA context means the FORTIKA accelerator,
also named the FORTIKA gateway._
</td> </tr>
<tr>
<td>
**Partner in charge of data collection**
</td>
<td>
_Data will be collected by the SMEs or use cases where the assessment needs to
be done._
_Nextel, while implementing/developing the service in the testing and
development phase, needs to be in charge of data collection using different
components of the FORTIKA solution._
</td> </tr>
<tr>
<td>
**Partner in charge of data storage**
</td>
<td>
_Data storage will be the responsibility of the partner where the risk
assessment is conducted, typically the use case partners. It is assumed the
FORTIKA gateway could provide storage space._
</td> </tr>
<tr>
<td>
**Data storage, including backups**
</td>
<td>
_This also depends on the exploitation model. In a first approach, the use
case partner or SME using the FORTIKA gateway will be responsible for data
storage._
_In a second approach, a maintenance service over this gateway could be
implemented._
</td> </tr>
<tr>
<td>
**Data access, sharing and reuse**
</td> </tr>
<tr>
<td>
**Data exploitation**
</td>
<td>
_Contribute to risk assessment service_
</td> </tr>
<tr>
<td>
**Data access policy/ dissemination level (confidential, only for members of
the consortium and the commission services, public)**
</td>
<td>
_Confidential, only for members of the consortium_
</td> </tr>
<tr>
<td>
**Data sharing, re-use and**
**distribution**
</td>
<td>
_Open to re-use and distribution inside the consortium_
</td> </tr>
<tr>
<td>
**Embargo periods (if any)**
</td>
<td>
_-_
</td> </tr> </table>
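As a toy illustration of how the items in Table 7 could combine, the following
Python sketch computes a residual risk score per asset/threat pair as
likelihood × impact, reduced by control effectiveness. The formula and all
values are assumptions made for the example; the deliverable does not fix a
risk formula.

```python
# Toy risk-assessment sketch (assumed formulation: residual risk =
# likelihood x impact x (1 - control effectiveness)).
assets = {"web_server": 5, "file_server": 4}        # impact on a 1-5 scale
threats = {
    "web_server": [("sql_injection", 0.4), ("dos", 0.3)],  # (threat, likelihood)
    "file_server": [("ransomware", 0.2)],
}
control_effectiveness = {"sql_injection": 0.5, "dos": 0.2, "ransomware": 0.7}

for asset, impact in assets.items():
    for threat, likelihood in threats.get(asset, []):
        residual = likelihood * impact * (1 - control_effectiveness.get(threat, 0))
        print(f"{asset:12s} {threat:14s} residual risk = {residual:.2f}")
```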
### FINT Dataset
This subsection describes the FORTIKA GW monitoring service dataset. The
following table describes the set of files or metadata containing the
following information.
**Table 8: FORTIKA GW Monitoring Dataset Characteristics**
<table>
<tr>
<th>
_**#** _
</th>
<th>
**Monitoring**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
_System monitor_
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
_Services status_
</td> </tr> </table>
<table>
<tr>
<th>
**Dataset name <DS.FINT.MS> **
</th> </tr>
<tr>
<td>
**Metadata and standards**
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
_The dataset will characterise the FORTIKA’s GW system and deployed services
statuses._
</td> </tr>
<tr>
<td>
**Data source (i.e. which**
**device)**
</td>
<td>
_Source of data will come from the GW’s hardware and software components (OS
and applications)._
</td> </tr>
<tr>
<td>
**Information about metadata (Production and storage dates, places) and**
**documentation**
</td>
<td>
_The content of metadata includes timestamps, numerical and categorical data.
These data can be used for assessing and reporting the performance and health
status of the FORTIKA GW._
</td> </tr>
<tr>
<td>
**Standards, format, estimated volume of data**
</td>
<td>
_Different formats for data. Standard storage on a document database._
</td> </tr>
<tr>
<td>
**Partners’ activities and responsibilities**
</td> </tr>
<tr>
<td>
**Device owner**
</td>
<td>
_The SME or final use case partner; ultimately, this depends on the FORTIKA
license typology. Device in the FORTIKA context means the FORTIKA accelerator,
also named the FORTIKA gateway._
</td> </tr>
<tr>
<td>
**Partner in charge of data collection**
</td>
<td>
_Data will be collected by the SMEs or use cases where the assessment needs to
be done._
</td> </tr>
<tr>
<td>
**Partner in charge of data storage**
</td>
<td>
_Data storage will be the responsibility of the GW owner (typically the SME or
use case partners)._
</td> </tr>
<tr>
<td>
**Data storage, including backups**
</td>
<td>
_The use case partner or SME using the FORTIKA gateway will be responsible for
data storage._
</td> </tr>
<tr>
<td>
**Data access, sharing and reuse**
</td> </tr>
<tr>
<td>
**Data exploitation**
</td>
<td>
_Contribute to FORTIKA GW’s performance monitoring and assessment_
</td> </tr>
<tr>
<td>
**Data access policy/ dissemination level (confidential, only for members of
the consortium and the commission services, public)**
</td>
<td>
_Confidential, only for members of the consortium_
</td> </tr>
<tr>
<td>
**Data sharing, re-use and**
**distribution**
</td>
<td>
_Open to re-use and distribution inside the consortium_
</td> </tr>
<tr>
<td>
**Embargo periods (if any)**
</td>
<td>
_-_
</td> </tr> </table>
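For illustration, one gateway health sample of the kind Table 8 describes
could be collected as follows. This is an assumed sketch, not FINT's service;
the `psutil` library and the service names are choices made for the example.

```python
# Illustrative sketch (assumed, not FINT's service): collecting one gateway
# health sample of the kind Table 8 describes, ready for a document database.
import json
from datetime import datetime, timezone

import psutil  # assumed measurement library

sample = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "system": {
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    },
    # status of deployed FORTIKA services (names illustrative)
    "services": {"sears": "running", "abac": "running", "siem_agent": "stopped"},
}

print(json.dumps(sample, indent=2))
```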
### CERTH Datasets
In this subsection datasets containing information on known threats are given.
Spam e-mail is a form of commercial advertising. Since this form of
advertising is not costly, spammers harvest recipient addresses from publicly
accessible sources, using programs to collect addresses on the web, and send
advertisement e-mails massively. Spammers often hide the origin of their
messages in order to circumvent anti-spammer lists used by antispam software.
Nowadays more than 95% of e-mail messages sent worldwide are believed to be
spam. The characteristics of the FORTIKA dataset for spam e-mails are given in
the following matrix.
**Table 9: Dataset Characteristics**
<table>
<tr>
<th>
_**#** _
</th>
<th>
**Spam e-mails**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
_Country._
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
_Autonomous system (AS)._
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
_Date of creation._
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
_Owner (sender)._
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
_Recipient._
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
_Text._
</td> </tr> </table>
Phishing e-mails are designed to deceive users into installing malicious
software on their computer or handing over personal information under false
pretences. Cybercriminals usually send such e-mails pretending to be a
professional company or organization. Such e-mails may contain hyperlinks
which lead to malicious sites or to malicious .exe files. Also, such e-mails
very commonly ask for account passwords on the pretext of a security breach.
The characteristics of the FORTIKA dataset for phishing e-mails are given in
the following matrix.
**Table 10: Dataset Characteristics**
<table>
<tr>
<th>
_**#** _
</th>
<th>
**Phishing e-mails**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
_Country._
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
_Autonomous system (AS)._
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
_Date of creation._
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
_Owner (sender)._
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
_Recipient._
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
_Text._
</td> </tr> </table>
Malicious URLs may be sent through e-mails in order to install malicious
software on users’ computers. The characteristics of the FORTIKA dataset for
malicious URLs are given in the following matrix.
**Table 11: Dataset Characteristics**
<table>
<tr>
<th>
_**#** _
</th>
<th>
**Malicious URLs**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
_Country._
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
_Autonomous system (AS)._
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
_Date of creation._
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
_Owner (sender)._
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
_HTML content._
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
_First level neighbours._
</td> </tr> </table>
Messages which may contain malicious links or software can be sent not only by
e-mail, but also from various applications (e.g. Facebook, Twitter, Viber,
etc.). The characteristics of the FORTIKA dataset for malicious messages are
given in the following matrix.
**Table 12: Dataset Characteristics**
<table>
<tr>
<th>
_**#** _
</th>
<th>
**Messages**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
_Application used._
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
_Time._
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
_Sender._
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
_Recipient._
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
_Text._
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
_Data type (e.g. media)._
</td> </tr> </table>
Information concerning the threat datasets is given in the following matrix.
<table>
<tr>
<th>
**Dataset name <DS. ** **CERTH.Threats > **
</th> </tr>
<tr>
<td>
**Metadata and standards**
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
_Known Threats._
</td> </tr>
<tr>
<td>
**Data source (i.e. which**
**device)**
</td>
<td>
_Data will be collected from available information of the web._
</td> </tr>
<tr>
<td>
**Information about metadata (Production and storage dates, places) and documentation**
</td>
<td>
_Information concerning identical e-mails that were sent to multiple users
(e.g. spam/phishing e-mails) can be used in order to detect such malicious
mails. These data can be used in order to visualize such abnormalities, while
senders of such e-mails can be detected._
</td> </tr>
<tr>
<td>
**Standards, format, estimated volume of data**
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
**Partners’ activities and responsibilities**
</td> </tr>
<tr>
<td>
**Device owner**
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
**Partner in charge of data collection**
</td>
<td>
_CERTH_
</td> </tr>
<tr>
<td>
**Partner in charge of data storage**
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
**Data storage, including backups**
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
**Data access, sharing and reuse**
</td> </tr>
<tr>
<td>
**Data exploitation**
</td>
<td>
_The data will be used to visualize abnormalities and detect cyber security
threats._
</td> </tr>
<tr>
<td>
**Data access policy/ dissemination level (confidential, only for members of
the consortium and the commission services, public)**
</td>
<td>
_The full dataset will be confidential and only the members participating in
the deployment and/or the consortium will have access to it. Furthermore, if
the dataset or specific portions of it (e.g. metadata, statistics, etc.) are
decided to be made openly accessible, they will be uploaded to the FORTIKA
open data platform. Of course, these data will be anonymized in order to avoid
any potential ethical issues with their publication and dissemination._
</td> </tr>
<tr>
<td>
**Data sharing, re-use and**
**distribution**
</td>
<td>
_-_
</td> </tr>
<tr>
<td>
**Embargo periods (if any)**
</td>
<td>
_-_
</td> </tr> </table>
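As an illustration of the detection idea described in the metadata row above,
the following Python sketch flags identical message bodies sent to many
distinct recipients, a simple signal for spam/phishing campaigns. The
recipient threshold is an arbitrary assumption.

```python
# Illustrative sketch (not CERTH's implementation): flagging identical message
# bodies sent to many distinct recipients, a simple signal for spam/phishing
# campaigns as described above.
import hashlib
from collections import defaultdict

RECIPIENT_THRESHOLD = 50  # assumed cut-off for a "mass" mailing

def find_mass_mailings(emails):
    """emails: iterable of dicts with 'sender', 'recipient' and 'text' keys."""
    recipients_by_text = defaultdict(set)
    senders_by_text = defaultdict(set)
    for mail in emails:
        digest = hashlib.sha256(mail["text"].encode("utf-8")).hexdigest()
        recipients_by_text[digest].add(mail["recipient"])
        senders_by_text[digest].add(mail["sender"])
    return [
        {"text_hash": h, "recipients": len(r), "senders": sorted(senders_by_text[h])}
        for h, r in recipients_by_text.items()
        if len(r) >= RECIPIENT_THRESHOLD
    ]
```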
## Data as used for evaluation in FORTIKA
The visualization of threats is expected to play a significant role in the
evaluation of FORTIKA. CERTH is contributing to the visualization of anomaly
detection and clustering of network entities. Visualization tools have been
developed for visual detection of anomalies regarding messages sent between
mobile phones, and for tracking frequently referenced key words on Twitter.
Visual clustering of mobile phones according to desired characteristics, such
as which mobile phones send messages in a similar way (similar frequency), is
achieved by using graph-based multimodal visualization methodologies, either
based on k-partite graphs or minimum spanning trees. The number of sent
messages through time can be also visualized, in order to track anomalies such
as increased number of sent messages during the night. The visualizations are
backed by outlier detection and clustering methods. This tool can contribute
significantly to the visualization of anomalies in various types of networks.
The above tool has also been adapted for the visualization of frequently
referenced words on Twitter. By tracking frequently referenced words, possible
threats can be detected; thus the adapted tool can also be used for the
visualization of anomaly detection in networks.
Adaptations of the aforementioned tools will be performed in order to
visualize other types of threats as well.
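A minimal sketch of the graph-based approach described above (illustrative
only, not the CERTH tool): mobile phones are connected by edges weighted with
the distance between their per-hour sending profiles, and a minimum spanning
tree is extracted as the backbone for visual clustering. The toy data and the
choice of the networkx library are assumptions.

```python
# Minimal sketch (assumed, not the CERTH tool): build a similarity graph of
# mobile phones from per-hour message counts and extract a minimum spanning
# tree, the structure the visual clustering described above is based on.
import networkx as nx
import numpy as np

# rows = phones, columns = messages sent in each hour slot (toy data)
counts = {
    "phone_A": np.array([0, 0, 1, 40, 42, 39]),
    "phone_B": np.array([1, 0, 2, 38, 41, 40]),
    "phone_C": np.array([30, 28, 31, 0, 1, 0]),   # night-time sender: outlier
}

G = nx.Graph()
phones = list(counts)
for i, a in enumerate(phones):
    for b in phones[i + 1:]:
        # Euclidean distance between sending profiles as edge weight
        G.add_edge(a, b, weight=float(np.linalg.norm(counts[a] - counts[b])))

mst = nx.minimum_spanning_tree(G)
print(sorted(mst.edges(data="weight")))
```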
## Self-audit process
Herein the self-audit process and the way it is expected to be implemented in
the FORTIKA project are discussed. The process that every dataset in FORTIKA
will follow to assess compliance with the DMP is shown in Figure 2. The
initial steps concern the concepts around the datasets that will be generated,
registered and published in FORTIKA. In this self-audit process, the
definition and any practical pre-validation, post-validation, certification
and self-audit will be defined as part of the Deployment Site Strategy, but
will be aligned with the overall FORTIKA project strategy. Self-audit
activities and the results of their implementation will be described as part
of the deployment site quality control measurements. Figure 2 represents and
describes the logic flow of the self-audit process in FORTIKA.
**Figure 2: FORTIKA’s Self-Audit Flow Diagram.**
The FORTIKA project has a selected group of experts acting as stewards of the
project, who will also support the activity and execution of self-audits on
the data that will be collected, managed and provided at the different
deployment sites. The FORTIKA self-audit process will aim at effectively
managing data and identifying the potential, conditions and value of its data
assets. Conducting a data self-audit will provide valuable information,
raising awareness of collection strengths and data issues to improve the
overall strategy. A data self-audit will highlight duplication of effort and
areas that require additional investment, allowing resources to be put to best
use. Most importantly, it highlights inadequacies in data creation and
curation practices, suggesting policy changes to lessen the risks. An
organisation that is knowledgeable about its data puts itself in a position to
maximize the value of its collections through continued use. The
implementation of a self-audit methodology is envisaged to bring benefits such
as the _prioritisation of data resources, which leads to efficiency savings,_
and the _ability to manage risks associated with data loss and
irretrievability_.
The self-audit process in FORTIKA follows an effective way to guarantee
consistency in the data, a controlled process and the best methodologies at
the different FORTIKA levels, i.e. Device, Platform and Application. The
following is the list of additional activities that are overseen in order to
execute the self-audit process:
* Planning
* Plan and Set-up the Self-Audit
* Collect Relevant Documents
* Identification, Classification and Assessment of Datasets
* Analyse Documents
* Identify Data Sets
* Classify Data Sets
* Assess Data Sets
* Report of Results and Recommendations
* Collate and analyse information from the audit
* Report on the compliance with the Data Management Plan
* Identify weaknesses and decide on corrective actions
**Figure 3: FORTIKA’s Self-Audit Process.**
## Considerations for Linked data
The term Linked Data refers to a set of best practices for publishing and
interlinking structured data on the Web. These best practices were introduced
by Tim Berners-Lee in his Web architecture note Linked Data [1], and have
become known as the Linked Data principles. Such practices may prove useful
during the development of FORTIKA.
Until recently the landscape of data on the Web comprised a plethora of
self-contained data repositories. Basically, each Web application or platform
maintained its own repository, even if there was a significant overlap between
these datasets and data that was publicly accessible. From a knowledge and
information retrieval perspective, however, the integration of different kinds
of data sources yields significant added value. The use of different formats
and different technologies made such an integration challenging in the past.
These challenges spurred the development and success of the concept of Linked
Data. This concept describes a method of publishing not only documents but
also all kinds of structured data so that it can be interlinked and become
more useful. It builds upon standard Web technologies such as HTTP and URIs
(Uniform Resource Identifier), but rather than using them to serve web pages
for human readers, it extends them to share information in a way suitable for
automatic reading by computers. This enables data from different sources to be
connected and queried. These principles are listed below.
* **URIs and name identification**
This principle advocates using URI references to all things, i.e. extending
the scope of the Web from online resources to encompass any object or concept
in the world. Thus, things are not just Web documents and digital content, but
also real world objects and abstract concepts. These may include tangible
things such as people, places and cars, or those that are more abstract, such
as the relationship type of knowing somebody, the set of all green cars in the
world, or the colour green itself.
To publish data on the Web, the things need to be uniquely identified. As
Linked Data builds directly on the Web architecture, the Web architecture term
resource is used to refer to these things of interest, which are, in turn,
identified by HTTP URIs. Linked Data uses only
HTTP URIs, avoiding other URI schemes such as Uniform Resource Names (URN)
and Digital Object Identifiers (DOI). The benefits of HTTP URIs are: (a)
they provide a simple way to create globally unique names in a decentralized
fashion, and (b) they serve not just as a name but also as a means of
accessing information describing the identified entity.
* **HTTP and URIs**
The HTTP protocol is the Web’s universal access mechanism. In the classic Web,
HTTP URIs are used to combine globally unique identification with a simple,
well-understood retrieval mechanism. Thus, this Linked Data principle
advocates the use of HTTP URIs to identify objects and abstract concepts,
enabling these URIs to be dereferenced (i.e., looked up) over the HTTP
protocol to obtain a description of the identified object or concept. As a
result, any HTTP client can look up the URI using the HTTP protocol and
retrieve a description of the resource that is identified by the URI. This
applies to URIs that are used to identify classic HTML documents, as well as
URIs that are used in the Linked Data context to identify real-world objects
and abstract concepts.
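For illustration, dereferencing an HTTP URI with content negotiation can be
done as follows. This is a sketch using the Python `requests` library; the
DBpedia URI is just an example of a public Linked Data resource, not a
FORTIKA endpoint.

```python
# Sketch of dereferencing an HTTP URI to obtain a machine-readable description
# via content negotiation (the Accept header asks for RDF in Turtle syntax).
import requests

uri = "http://dbpedia.org/resource/Berlin"  # example public Linked Data URI
response = requests.get(uri, headers={"Accept": "text/turtle"},
                        allow_redirects=True, timeout=10)
print(response.status_code, response.headers.get("Content-Type"))
print(response.text[:300])  # first lines of the RDF description
```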
* **Including links to other URIs**
In order to enable a wide range of different applications to process Web
content, it is important to agree on standardised content formats. The
agreement on HTML as a dominant document format was an important factor that
made the Web scale. The third Linked Data principle therefore advocates use of
a single data model for publishing structured data on the Web – the Resource
Description Framework (RDF).
RDF provides a graph-based data model that is extremely simple on the one hand
but strictly tailored towards Web architecture on the other hand. RDF itself
just describes the data model; it does not address the format in which the
data is eventually stored and transferred. To be published on the Web, RDF
data can be serialised in different formats. The two RDF serialisation formats
most commonly used to publish Linked Data on the Web are RDF/XML and RDFa.
* **Querying linked data**
_SPARQL_ (SPARQL Protocol and RDF Query Language) is the most popular query
language to retrieve and manipulate data stored in RDF, and it became an
official W3C Recommendation in 2008. Depending on the purpose, SPARQL
distinguishes the following four query variations:
* SELECT query: extraction of (raw) information from the data
* CONSTRUCT query: extraction of information and transformation into RDF
* ASK query: extraction of information resulting in a True/False answer
* DESCRIBE query: extraction of RDF graph that describes the resources found
Given that RDF forms a directed, labelled graph for representing information,
the most basic construct of a SPARQL query is a so-called basic graph pattern.
Such a pattern is very similar to an RDF triple, with the exception that the
subject, predicate or object may be a variable. A basic graph pattern matches
a subgraph of the RDF data when RDF terms from that subgraph may be
substituted for the variables and the result is an RDF graph equivalent to the
subgraph. Using the same identifier for variables also allows combining
multiple graph patterns.
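A small runnable illustration of a SELECT query over an RDF graph, using the
`rdflib` Python library (an assumed toolkit; the text does not prescribe one).
The basic graph pattern `?device ex:reportedThreat ?threat` matches subgraphs
of the data, as described above; all names and URIs are illustrative.

```python
# Illustrative SPARQL SELECT query with rdflib; the vocabulary and data are
# toy examples, not a FORTIKA schema.
from rdflib import Graph

turtle_data = """
@prefix ex: <http://example.org/> .
ex:gateway1 ex:reportedThreat ex:portScan ;
            ex:locatedAt      ex:siteA .
ex:gateway2 ex:reportedThreat ex:phishing ;
            ex:locatedAt      ex:siteB .
"""

g = Graph()
g.parse(data=turtle_data, format="turtle")

query = """
PREFIX ex: <http://example.org/>
SELECT ?device ?threat
WHERE { ?device ex:reportedThreat ?threat . }
"""

for device, threat in g.query(query):
    print(device, threat)
```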
### Storage, maintenance & Sharing of the data
Datasets will be released as agreed between partners, anonymised and in
accordance with the GDPR, when available.
# Conclusion
The Data Management Life Cycle (DMLC) of the FORTIKA project has been
designed, introduced and described within this document following a
data-centric approach. It describes all the processes for data creation, data
storage, data processing and data security across the data value chain, from
deployment site operations to making data openly accessible.
The purpose of the FORTIKA DMP is to support the data management life cycle
for all data that will be generated, collected and processed by the FORTIKA
project in the different deployment sites and in the FORTIKA system.
This deliverable renders the 1st version of the Data Management Plan and it
will remain a living document during the whole lifespan of the FORTIKA
project. This document is expected to further mature during the
architecture/building phase of FORTIKA and when the requirements collection
phase is finalized. A 2nd version will be distributed as part of the 3rd PMR,
while the final version of the DMP in M36 shall include the update(s)
concerning the outcomes of the above-mentioned activities.
# 1\. Introduction
This deliverable will set out the second version of the data management plan
(DMP) for the COMPACT project. A DMP is a key element of good data management,
which is especially important in the COMPACT context, as all Horizon
2020-funded projects from 2017 onward are required to contain a DMP. 1
Alongside open access publications, open access research data contributes to
achieving open science. 2
The DMP is applicable to data, needed to validate the results presented in
scientific publications. It is part of the European Commission’s Open Research
Data (ORD) Pilot, which was launched as a general project requirement in 2017.
According to the Commission’s website, the pilot aims to ‘improve and maximise
access to and re-use of research data generated by Horizon 2020’. It balances
between openness and protection of scientific information, commercialisation
and intellectual property rights (IPRs), as well as privacy and security
concerns. 3
A previous version of this document, entitled D2.6 Data management plan (v1),
was drawn up in M6 of the project (October 2017), and set out the basic
principles of data management in COMPACT. The current report has been updated
to reflect the advances and current status of data management in the project.
Following the reviewers’ comments, this document was revised in May 2019. Its
follow-up version, the final iteration of the DMP (D2.9), will nevertheless be
provided alongside this document in order to reflect the final strategy for
data management in COMPACT.
The DMP for COMPACT is based on the European Commission’s Guidelines on FAIR
Data Management in Horizon 2020 4 and the COMPACT Grant Agreement. 5 It
defines which data will be open by detailing the types of data generated by
the project, its accessibility for verification and re-use, exploitation, as
well as its curation and preservation.
In order to implement the open research data principle, the DMP sets out the
following information:
* The handling of research data during and after the end of the project,
* What data will be collected, processed and/or generated,
* Which methodology and standards will be applied,
* Whether data will be shared/made open access and
* How data will be curated and preserved (including after the end of the project).
Sections 2 to 6 of this document will cover the different DMP components,
based on the outline suggested in the Guidelines. They are based on input from
the following partners: AIT, CINI, INOV and KUL, as indicated in the relevant
sections.
# 2\. Data summary
## 2.1. AIT
**What is the purpose of the data collection/generation and its relation to
the objectives of the project?**
The data collection within COMPACT is part of the psychological studies, the
humanfactor profiling and the awareness methods. The purpose of data
collection is scientific, i.e. contributing to an understanding of workers’
security-related behaviour and of how it can be influenced.
In particular, the psychological studies and the human factor profiling survey
measure knowledge about cyber security, taking into account the work-related
variables (i.e., time pressure, decision-making autonomy), IT-skills,
organizational variables (i.e., security climate) and demographic information
(i.e. gender, age, tenure). In addition, we collect qualitative data based on
interviews and observations when applying and evaluating the developed
awareness methods Investigators Diary and Sectopia.
Analyses are made based on aggregated (not individual) data. This data will be
published and then made available through open access.
**What types and formats of data will the project generate/collect?**
The data collected are responses to an online survey (Human Factor Profiling)
and logfiles of the developed game (Sectopia).
**Will you re-use any existing data and how?**
Data will be used for scientific publications. For that purpose, data will be
anonymized and analysed in an aggregated way.
**What is the origin of the data?**
Data is empirical data collected by applying research methods like
questionnaires, interviews, or observations.
**What is the expected size of the data?**
The expected size of the data depends on the number of participants. Overall
there will be less than 500 employees taking part in the studies.
**To whom might it be useful ('data utility')?**
This data is useful as it has the potential to increase the understanding of
security-related behaviour within the scientific community. As a consequence,
when this scientific knowledge is transferred to the public (within the
project as well), this will support making organisations more resilient
against cybercrime.
## 2.2. CINI
**What is the purpose of the data collection/generation and its relation to
the objectives of the project?**
Within the COMPACT project, CINI is in charge of developing an advanced
Security Information and Event Management (SIEM) system that endows LPAs’
organisations with real-time monitoring capabilities. SIEM services receive log
files, i.e. records of events occurring in an organization’s systems and
networks, for example when a user attempts to authenticate in order to access
the system, or when a system task is executed (such as starting a service or
shutting down the system). The content of these log files, which relates to
computer security information, will then be analysed and used to investigate
malicious activities. An alarm or event is generated for each detected attack.
**What types and formats of data will the project generate/collect?**
SIEM systems come with multiple adapters, which receive data/events from
different sources, such as Operating System (OS) log files (in proprietary or
open formats) or Commercial Off The Shelf (COTS) products for logical and
physical security monitoring, including: Windows registries, Wireshark,
Nessus, Nikto, Snort, Ossec, Argus, Cain & Abel, OpenNMS, Nagios, CENTEROS,
Ganglia, Milestone, openHAB, IDenticard, FieldAware, and CIMPLICITY.
The format of the generated data is JSON and it is stored in an Elasticsearch
analytics engine.
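For illustration only, the snippet below sketches how a raw log line might be
normalized into such a JSON event and pushed to Elasticsearch over its standard
REST API. The endpoint, index name and field names are assumptions made for
this example, not the actual COMPACT SIEM implementation.

```python
import json
from datetime import datetime, timezone

import requests  # plain HTTP client; a dedicated Elasticsearch client could be used instead

ES_URL = "http://localhost:9200"   # assumed Elasticsearch endpoint
INDEX = "compact-siem-events"      # illustrative index name

def normalize_event(raw_line: str, source: str) -> dict:
    """Turn one raw log line into the JSON document to be stored."""
    return {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "source": source,            # e.g. "snort", "ossec", "nessus"
        "message": raw_line.strip(),
    }

def index_event(event: dict) -> None:
    """POST the JSON event to the Elasticsearch _doc endpoint."""
    resp = requests.post(
        f"{ES_URL}/{INDEX}/_doc",
        data=json.dumps(event),
        headers={"Content-Type": "application/json"},
    )
    resp.raise_for_status()

if __name__ == "__main__":
    index_event(normalize_event("Failed password for root from 10.0.0.5", "ossec"))
```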
**Will you re-use any existing data and how?**
Currently, existing LPA data is not used to develop/test the SIEM. However, if
needed, we plan to use existing anonymized data stored in the archives of some
of the LPAs involved in the project.
**What is the origin of the data?**
The data collected and analysed by the SIEM system originates from testing
activities carried out at pilot sites (LPAs involved within the project) and
will rely on the collection and analysis of log files of the LPAs
participating in the COMPACT project.
**What is the expected size of the data?**
At this stage of the project, it is not possible to predict the amount of the
data that will be processed and stored, because it highly varies depending on
the number, type and frequency of the monitored data sources.
**To whom might it be useful ('data utility')?**
The data collected by the SIEM system can be useful to other technical
partners in charge of developing COMPACT tools and services, such as a risk
assessment tool or the personalization of training courses for LPAs’
employees. The data could also be useful to other research groups working on
similar research, as well as for testing alternative SIEM solutions.
## 2.3. INOV
**What is the purpose of the data collection/generation and its relation to
the objectives of the project?**
During the COMPACT project, INOV will collect data to test and demonstrate its
Business process intrusion detection system (BP-IDS). This data collection is
related to the project objective “SO3: Lower the entry barrier to timely
detection and reaction to cyberthreats”, and may occur during the tasks: “Task
4.3 Threat intelligence and monitoring Component”; “Task 4.5 Integration of
solutions in a unified platform”; “Task 5.1 Validation and Demonstration
scenarios”; “Task 5.2 Trials Setup”; and “Task 5.3 Pilot execution and
demonstration”.
**What types and formats of data will the project generate/collect?**
This is not definitive at this phase, since the data to be collected is still
being defined. However, it is foreseen that INOV technology will collect at
least three types of information: documentation, operational data and
statistical data. The documentation will be collected before the trials and is
composed of data produced by CMA that explains the business processes in place
in their municipality. The operational data will be collected during the
trials by monitoring the interactions between the IT systems present in the
CMA IT infrastructure and its database, but it will only be processed within
the IT infrastructure of CMA. The statistical data will be collected by INOV
after the trials to evaluate the performance of BP-IDS during the trials.
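To illustrate the general idea behind business-process intrusion detection
(this is a toy sketch, not the actual BP-IDS implementation; the process name
and steps are invented), each monitored transaction can be checked against the
step order defined in the documented business process:

```python
# Expected step order per documented business process (invented example).
EXPECTED_STEPS = {
    "license_request": ["submit_form", "validate_identity", "approve", "notify_citizen"],
}

def check_trace(process: str, observed: list[str]) -> list[str]:
    """Return the deviations between an observed trace and the specification."""
    expected = EXPECTED_STEPS.get(process)
    if expected is None:
        return [f"unknown process: {process}"]
    deviations = []
    for i, step in enumerate(observed):
        want = expected[i] if i < len(expected) else "<end>"
        if step != want:
            deviations.append(f"step {i}: expected {want}, got {step}")
    if len(observed) < len(expected) and not deviations:
        deviations.append("trace incomplete")
    return deviations

# An approval issued without identity validation is flagged as a deviation.
print(check_trace("license_request", ["submit_form", "approve"]))
```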
**Will you re-use any existing data and how?**
Of the three datasets that have been identified, only the documentation will
be reused, based on what CMA provided during the trials.
Datasets containing documentation and statistical data collected during the
trials will be reused in scientific publications. Specifically, the
documentation reused in these publications was obtained from COMPACT public
deliverables (D5.1 Validation and demonstration scenarios, D5.2 Trial setup,
and D5.3 Pilot execution and demonstration report); it contains the business
process descriptions employed in the municipality and a testbed description
identical to the one used for the experiments detailed in the trials execution
and demonstration report. Moreover, although the statistical data are
presented in aggregated form in D5.3, the scientific publications use the same
statistical data that supports the analysis presented in that deliverable.
**What is the origin of the data?**
This has not been completely defined; the datasets need to be specified first
in order to answer this question. However, at this stage the three datasets
chosen will be collected from CMA. Specifically, the documentation will be
based on CMA’s archive documents, the operational data on the data produced on
this LPA’s computers, and the statistical data on the deployments of the
BP-IDS tool in the IT infrastructure of CMA.
**What is the expected size of the data?**
It is difficult to estimate the size of the data at this stage, because it
varies greatly with the network protocols and files monitored.
**To whom might it be useful ('data utility')?**
The principal beneficiary of the operational data will be CMA, which will use
the tools developed to monitor threats against its IT infrastructure. INOV
will use it to adapt BP-IDS for LPAs, more specifically to the IT
infrastructure of CMA. The operational data could also be useful to technical
partners of the COMPACT project that require live data to adapt their
technical solutions to LPA environments. Operational data, which could be most
useful to third parties for their own research interests (e.g., validation
scenarios for GDPR enforcement tools), may contain personal data and is kept
within the CMA IT infrastructure, with no foreseen methods to make it
available. The statistical data collected by INOV will be evaluated and, if
considered of interest to the research community (mainly in the scope of open
access to validate INOV’s scientific results), will be made available by INOV
after receiving CMA’s authorization.
# 3. FAIR data
Under Horizon 2020’s principle of open access to data, research data must be
FAIR: findable, accessible, interoperable and reusable. This will contribute
to the use of data in future research. 6
In order to be **Findable** :
* F1. (meta)data are assigned a globally unique and eternally persistent identifier.
* F2. data are described with rich metadata.
* F3. (meta)data are registered or indexed in a searchable resource.
* F4. metadata specify the data identifier.
In order to be **Accessible** :
* A.1. (meta)data are retrievable by their identifier using a standardized communications protocol.
* A1.1. the protocol is open, free, and universally implementable.
* A1.2. the protocol allows for an authentication and authorization procedure, where necessary.
* A2. metadata are accessible, even when the data are no longer available.
In order to be **Interoperable** :
* I1. (meta)data use a formal, accessible, shared, and broadly applicable language for knowledge representation.
* I2. (meta)data use vocabularies that follow FAIR principles.
* I3. (meta)data include qualified references to other (meta)data.
In order to be **Re-usable** :
* R1. meta(data) have a plurality of accurate and relevant attributes.
* R1.1. (meta)data are released with a clear and accessible data usage license.
* R1.2. (meta)data are associated with their provenance.
* R1.3. (meta)data meet domain-relevant community standards.
Answering the following questions will contribute towards compliance with the
FAIR data standards. The answers are provided in a comprehensive manner, not
on a yes/no basis.
## 3.1. Making data findable, including provisions for metadata
### 3.1.1. AIT
**Are the data produced and/or used in the project discoverable with metadata,
identifiable and locatable by means of a standard identification mechanism
(e.g. persistent and unique identifiers such as Digital Object Identifiers)?**
No.
**What naming conventions do you follow?**
Not applicable.
**Will search keywords be provided that optimize possibilities for re-use?**
No.
**Do you provide clear version numbers?**
No.
**What metadata will be created? In case metadata standards do not exist in
your discipline, please outline what type of metadata will be created and
how.**
No metadata will be created. The data created within the psychological studies
are aggregated statistical data (e.g., means, standard deviations) as well as
aggregated qualitative data (e.g., interpreted data from sources like
interviews or observations) about employees’ perceptions of security-related
behaviour.
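For illustration, the following minimal sketch shows the kind of aggregation
applied before any publication, assuming anonymised per-respondent scores; the
variable names and values are invented:

```python
from statistics import mean, stdev

# Anonymised per-respondent scores on a 1-5 scale (invented values).
responses = {
    "security_knowledge": [4, 3, 5, 2, 4],
    "time_pressure":      [2, 4, 3, 3, 5],
}

# Only the aggregates (not the individual rows) would ever be reported.
for variable, scores in responses.items():
    print(f"{variable}: n={len(scores)}, M={mean(scores):.2f}, SD={stdev(scores):.2f}")
```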
### 3.1.2. CINI
**Are the data produced and/or used in the project discoverable with metadata,
identifiable and locatable by means of a standard identification mechanism
(e.g. persistent and unique identifiers such as Digital Object Identifiers)?**
Not defined yet. However, if the Zenodo repository is adopted as the platform
for data storage and sharing, the data will be persistently identified through
DOIs. In addition, further actions will be performed to generate metadata that
makes the data accessible.
**What naming conventions do you follow?**
Data will be provided with a timestamp and an identifier. The identifier will
be either the “queryname” in the case of aggregated monitoring results, or the
“source” name in the case of raw data. To describe the data and their meaning,
we refer to the “Glossary of Key Information Security Terms” provided by NIST
7 or the SANS Glossary of Security Terms 8 .
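By way of example only, the convention described above could be realized as
follows; the identifiers shown are invented:

```python
from datetime import datetime, timezone

def data_file_name(identifier: str, ext: str = "json") -> str:
    """Build '<identifier>_<UTC timestamp>.<ext>' per the convention above."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return f"{identifier}_{stamp}.{ext}"

print(data_file_name("failed-logins-per-host"))  # aggregated result: the "queryname"
print(data_file_name("snort", ext="log"))        # raw data: the "source" name
```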
**Will search keywords be provided that optimize possibilities for re-use?**
At this stage, we have not planned yet to provide keywords that optimize
possibilities for re-use.
**Do you provide clear version numbers?**
Not defined yet. It may happen that the gathered data are not modified after
collection, in which case only one version number will be provided.
**What metadata will be created? In case metadata standards do not exist in
your discipline, please outline what type of metadata will be created and
how.**
Not defined yet.
### 3.1.3. INOV
**Are the data produced and/or used in the project discoverable with metadata,
identifiable and locatable by means of a standard identification mechanism
(e.g. persistent and unique identifiers such as Digital Object Identifiers)?**
The data collected so far does not contain any metadata.
**What naming conventions do you follow?**
Not defined yet. INOV does not follow naming conventions for the data
collection. Thus, additional measures to convert the data would be necessary
to make it compatible with any naming conventions chosen.
**Will search keywords be provided that optimize possibilities for re-use?**
Not defined yet. The data collected is not structured. Thus, additional
measures to convert data are necessary to make data searchable through
keywords.
**Do you provide clear version numbers?**
Not defined yet. It is most likely that most of the data collected will not be
altered and that only one version will be created per dataset.
**What metadata will be created? In case metadata standards do not exist in
your discipline, please outline what type of metadata will be created and
how.**
Not defined yet, but due to the nature of the data collected, the creation of
metadata is very unlikely.
## 3.2. Making data openly accessible
### 3.2.1. AIT
**Which data produced and/or used in the project will be made openly available
as the default? If certain datasets cannot be shared (or need to be shared
under restrictions), explain why, clearly separating legal and contractual
reasons from voluntary restrictions.** Note that in multi-beneficiary projects
it is also possible for specific beneficiaries to keep their data closed if
relevant provisions are made in the consortium agreement and are in line with
the reasons for opting out. There has been no opt-out in the COMPACT project
yet.
The data collected within the project will be anonymised and thus cannot be
linked to a specific person. All scientific publications will be made open
access and thus openly available.
**How will the data be made accessible (e.g. by deposition in a repository)?**
All scientific articles will be open access.
**What methods or software tools are needed to access the data?**
The data is stored as numeric values or strings. Data will be provided as csv
or Excel files.
**Is documentation about the software needed to access the data included?**
No, this is not needed.
**Is it possible to include the relevant software (e.g. in open source
code)?**
There is no need for that.
**Where will the data and associated metadata, documentation and code be
deposited? Preference should be given to certified repositories which support
open access where possible.**
All scientific articles will be open access.
**Have you explored appropriate arrangements with the identified repository?**
Not applicable.
**If there are restrictions on use, how will access be provided?**
There are no restrictions on use.
**Is there a need for a data access committee?**
No, there is no need for that.
**Are there well described conditions for access (i.e. a machine-readable
license)?**
There is no need for that.
**How will the identity of the person accessing the data be ascertained?**
The data collected within the project will be anonymised, thus it cannot be
linked to a specific person.
### 3.2.2. CINI
**Which data produced and/or used in the project will be made openly available
as the default? If certain datasets cannot be shared (or need to be shared
under restrictions), explain why, clearly separating legal and contractual
reasons from voluntary restrictions.** Note that in multi-beneficiary projects
it is also possible for specific beneficiaries to keep their data closed if
relevant provisions are made in the consortium agreement and are in line with
the reasons for opting out. There has been no opt-out in the COMPACT project
yet.
The data produced and/or used in the project will be made openly available by
default only after a pseudonymisation and/or anonymization process, in order
to prevent the data from being attributed to a specific person. This is the
case, for example, for the data regarding USB usage in the CDA scenario.
**How will the data be made accessible (e.g. by deposition in a repository)?**
The data will be made accessible through a research data repository. The
consortium will take measures to enable third parties to access, mine,
exploit, reproduce, and disseminate the data free of charge.
**What methods or software tools are needed to access the data?**
The best candidate tool for data sharing – at the time of this writing – is
ZENODO, an OpenAIRE/CERN compliant repository. Zenodo builds and operates a
simple and innovative service that enables researchers, scientists, EU
projects and institutions to share, preserve and showcase multidisciplinary
research results (data and publications) that are not part of the existing
institutional or subject-based repositories of the research communities.
Zenodo enables researchers, scientists, EU projects and institutions to:
* easily share the long tail of small research results in a wide variety of formats, including text, spreadsheets, audio, video, and images across all fields of science;
* display the research results and receive credit by making the research results citable and integrating them into existing reporting lines to funding agencies like the European Commission;
* easily access and reuse shared research results.
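Should Zenodo be selected, a deposit could be automated through its publicly
documented REST API. The following is a minimal sketch based on that API; the
access token, file name and metadata are placeholders, and the consortium has
not committed to this exact workflow:

```python
import requests

TOKEN = "..."  # personal access token (placeholder)
API = "https://zenodo.org/api"

# 1. Create an empty deposition.
dep = requests.post(f"{API}/deposit/depositions",
                    params={"access_token": TOKEN}, json={}).json()

# 2. Upload a file to the deposition's bucket.
with open("dataset.csv", "rb") as fh:
    requests.put(f"{dep['links']['bucket']}/dataset.csv",
                 params={"access_token": TOKEN}, data=fh)

# 3. Attach minimal metadata (title, type, description, creators).
meta = {"metadata": {
    "title": "COMPACT anonymised dataset",
    "upload_type": "dataset",
    "description": "Anonymised data from COMPACT trials.",
    "creators": [{"name": "COMPACT consortium"}],
}}
requests.put(f"{API}/deposit/depositions/{dep['id']}",
             params={"access_token": TOKEN}, json=meta)

# 4. Publish; Zenodo then mints a DOI for the record.
requests.post(f"{API}/deposit/depositions/{dep['id']}/actions/publish",
              params={"access_token": TOKEN})
```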
**Is documentation about the software needed to access the data included?**
Yes, it is.
**Is it possible to include the relevant software (e.g. in open source
code)?**
It is possible, but not decided yet if open source code will be included.
**Where will the data and associated metadata, documentation and code be
deposited? Preference should be given to certified repositories which support
open access where possible.**
The consortium plans to deposit data in an OpenAIRE compliant research data
repository.
**Have you explored appropriate arrangements with the identified repository?**
If the consortium selects Zenodo as sharing and storage repository, then the
arrangements have already been identified and are described in the Zenodo
Terms Of Use. More details are available here:
_http://about.zenodo.org/terms/_
**If there are restrictions on use, how will access be provided?**
If the consortium selects Zenodo as sharing and storage repository, files may
be deposited under closed, open, or embargoed access. Files deposited under
closed access are protected against unauthorized access at all levels.
Contents deposited under an embargo status will be managed by the repository,
which will restrict access to the data until the end of the embargo period, at
which time the content will automatically become publicly available. More
information is available here:
_http://about.zenodo.org/policies/_
**Is there a need for a data access committee?**
Not defined yet.
**Are there well described conditions for access (i.e. a machine-readable
license)?**
If the consortium selects Zenodo as sharing and storage repository, then this
information is available here: _http://about.zenodo.org/policies/_
**How will the identity of the person accessing the data be ascertained?**
If the consortium selects Zenodo as sharing and storage repository, then
access to Zenodo’s content will be open to all, for non-military purposes
only. More information is available here: _http://about.zenodo.org/terms/_ .
### 3.2.3. INOV
**Which data produced and/or used in the project will be made openly available
as the default? If certain datasets cannot be shared (or need to be shared
under restrictions), explain why, clearly separating legal and contractual
reasons from voluntary restrictions.** Note that in multi-beneficiary projects
it is also possible for specific beneficiaries to keep their data closed if
relevant provisions are made in the consortium agreement and are in line with
the reasons for opting out. There has been no opt-out in the COMPACT project
yet.
Not defined yet. However, since making the data public may reveal CMA’s
business secrets (such as detailed business process specifications, or
operational data containing personal or sensitive information), the data will
be made available only to a restricted number of personnel, to avoid any risk
of harm to CMA.
**How will the data be made accessible (e.g. by deposition in a repository)?**
Not defined yet. The data is stored on, and only accessible from, computers
located at CMA headquarters. INOV will only have access to the information by
using the tool on closed premises.
**What methods or software tools are needed to access the data?**
INOV will access the data when visiting CMA headquarters, or remotely through
the VPN made available by CMA. Either way, the information collected will
remain on CMA’s premises.
**Is documentation about the software needed to access the data included?**
Not defined yet, but most likely it will not be available since it may
undermine CMA’s security methodology.
**Is it possible to include the relevant software (e.g. in open source
code)?**
Not defined yet.
**Where will the data and associated metadata, documentation and code be
deposited? Preference should be given to certified repositories which support
open access where possible.**
Not defined yet.
**Have you explored appropriate arrangements with the identified repository?**
Not defined yet.
**If there are restrictions on use, how will access be provided?**
Not defined yet.
**Is there a need for a data access committee?**
Not defined yet.
**Are there well described conditions for access (i.e. a machine-readable
license)?**
Not defined yet.
**How will the identity of the person accessing the data be ascertained?**
Not defined yet.
## 3.3. Making data interoperable
### 3.3.1. AIT
**Are the data produced in the project interoperable, that is allowing data
exchange and re-use between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different
origins)?**
As the produced data is based on specific empirical methods, it is
interoperable with further data stemming from the same methods. If a
scientific article is published and made open access, we can provide the data
for other researchers. The data can be provided as .xls or .csv files.
**What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?**
Not defined yet. That depends on the requirements of the journal.
**Will you be using standard vocabularies for all data types present in your
data set, to allow inter-disciplinary interoperability?**
Not defined yet.
**In case it is unavoidable that you use uncommon or generate project specific
ontologies or vocabularies, will you provide mappings to more commonly used
ontologies?**
Not applicable.
### 3.3.2. CINI
**Are the data produced in the project interoperable, that is allowing data
exchange and re-use between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different
origins)?**
Yes.
**What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?**
The data format used to represent the data is JSON, a lightweight data-
interchange format supported by all modern programming languages.
**Will you be using standard vocabularies for all data types present in your
data set, to allow inter-disciplinary interoperability?**
Not defined yet.
**In case it is unavoidable that you use uncommon or generate project specific
ontologies or vocabularies, will you provide mappings to more commonly used
ontologies?**
Not defined yet.
### 3.3.3. INOV
**Are the data produced in the project interoperable, that is allowing data
exchange and re-use between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different
origins)?**
No, most of the data collected is raw data with the formats originally
produced by CMA software monitored by BP-IDS during COMPACT trials. Also,
since the data is only being used throughout the project to validate BP-IDS,
the dataset collected is stored in a database that follows a table schema
created specifically for BP-IDS.
**What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?**
Not defined yet.
**Will you be using standard vocabularies for all data types present in your
data set, to allow inter-disciplinary interoperability?**
Not defined yet.
**In case it is unavoidable that you use uncommon or generate project specific
ontologies or vocabularies, will you provide mappings to more commonly used
ontologies?**
Not defined yet.
## 3.4. Increase data re-use (through clarifying licenses)
### 3.4.1. AIT
**How will the data be licensed to permit the widest re-use possible?**
The re-use possibilities depend on the journal chosen for submission (the
preference would be to co-submit the data alongside the manuscript).
**When will the data be made available for re-use? If an embargo is sought to
give time to publish or seek patents, specify why and how long this will
apply, bearing in mind that research data should be made available as soon as
possible.**
The scientific data will be shared when an article is accepted and published
in a scientific journal.
**Are the data produced and/or used in the project useable by third parties,
in particular after the end of the project? If the re-use of some data is
restricted, explain why.**
Anonymised scientific data that is published in a scientific journal can be
re-used by other researchers for scientific purposes.
**How long is it intended that the data remains re-usable?**
This depends on the policies of the chosen scientific journal.
**Are data quality assurance processes described?**
Not defined yet.
### 3.4.2. CINI
**How will the data be licensed to permit the widest re-use possible?**
If the consortium chooses Zenodo as data sharing platform, the data will be
made available according to the Zenodo Open Access policies.
**When will the data be made available for re-use? If an embargo is sought to
give time to publish or seek patents, specify why and how long this will
apply, bearing in mind that research data should be made available as soon as
possible.**
At this stage of the project, it is not possible to predict when the data will
be made available for re-use.
**Are the data produced and/or used in the project useable by third parties,
in particular after the end of the project? If the re-use of some data is
restricted, explain why.**
Anonymised data will be usable by third parties.
**How long is it intended that the data remains re-usable?**
We intend to store the data and keep it re-usable for an appropriate period of
time, according to the key guidelines established by the following regulations:
* Directive (EU) 2016/1148 of the European Parliament and of the Council of 6 July 2016 concerning measures for a high common level of security of network and information systems across the Union (NIS Directive);
* Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
**Are data quality assurance processes described?**
Not defined yet.
### 3.4.3. INOV
**How will the data be licensed to permit the widest re-use possible?**
Not defined yet.
**When will the data be made available for re-use? If an embargo is sought to
give time to publish or seek patents, specify why and how long this will
apply, bearing in mind that research data should be made available as soon as
possible.**
Not defined yet.
**Are the data produced and/or used in the project useable by third parties,
in particular after the end of the project? If the re-use of some data is
restricted, explain why.**
Not foreseeable at this stage.
**How long is it intended that the data remains re-usable?**
For the duration of the project.
**Are data quality assurance processes described?**
Not defined yet.
# 4. Allocation of resources – the whole consortium
According to the Horizon 2020 rules, costs related to open access to research
data are eligible for reimbursement during the duration of the project under
the conditions defined in the COMPACT Grant Agreement, in particular Articles
6 and 6.2.D.3. 9 These are direct costs, related to subcontracting of
project tasks, such as subcontracting the open access to data.
**What are the costs for making data FAIR in your project?**
Within the COMPACT project, AIT has a budget for making accepted scientific
articles open access. The originally planned amount for making data FAIR is
10.000 EUR. Currently, two journal articles have been submitted and will be
funded from the reserved budget if they are accepted. CINI and INOV do not
foresee any data management costs to make data FAIR.
**How will these be covered? Note that costs related to open access to
research data are eligible as part of the Horizon 2020 grant (if compliant
with the Grant Agreement conditions).**
AIT has reserved the necessary financing for open access costs in its budget
under "other costs".
**Who will be responsible for data management in your project?**
A specific role, namely the Data Protection Officer (DPO), has been created by
the COMPACT project to address data management issues at consortium-level.
Salvatore D’Antonio from CINI has been appointed as DPO and is responsible for
the coordination of the interactions between the consortium and the Data
Protection Authorities (DPAs) located in partners’ countries. The DPO is the
common interface of the COMPACT consortium towards external entities and
bodies dealing with data protection and management. The DPO is also in charge
of supporting and coordinating the activities performed by the Data
Controllers that have been appointed by each COMPACT partner. The role of the
local Data Controller is to implement the data protection and management
policies each LPA has defined. This means that the storage and management of
the locally collected data is under the control and responsibility of each
Data Controller.
**Are the resources for long term preservation discussed (costs and potential
value, who decides and how what data will be kept and for how long)?**
Not defined yet.
# 5. Data security
### 5.1.1. AIT
**What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?**
The data is only stored internally within AIT facilities, which provide
state-of-the-art IT security. State-of-the-art IT security measures and
company policies mitigate most of the risk of illegitimate access. Firewalls
(to prevent illegitimate access from outside) and a rights-based file system
(to prevent illegitimate access from inside) are the countermeasures against
this risk.
**Is the data safely stored in certified repositories for long term
preservation and curation?**
The data is only stored internally within AIT facilities, which provide
state-of-the-art IT security. If the scientific data are published, they will
be made open access. Again, this depends on the journal.
### 5.1.2. CINI
**What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?**
Regarding security, all of the data collected will be stored in a database
only accessible to authenticated users on the partner’s premises. Regarding
data recovery, database backups will be stored on premises and will only be
accessible to CINI. Sensitive data will never be transferred outside LPA
premises unless it has been previously anonymised.
**Is the data safely stored in certified repositories for long term
preservation and curation?**
It is not definitive in this phase.
### 5.1.3. INOV
**What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?**
Regarding security, all the data collected will be stored on a database only
accessible to authenticated users on the partner premises. Regarding the data
recovery, database backups will be stored on premises and only accessible to
INOV.
**Is the data safely stored in certified repositories for long term
preservation and curation?**
It is not definitive in this phase, but it is not expected to store the
collected data in a repository.
# 6. Ethical and legal aspects
**Are there any ethical or legal issues that can have an impact on data
sharing? These can also be discussed in the context of the ethics review. If
relevant, include references to ethics deliverables and ethics chapter in the
Description of the Action (DoA).**
Addressing legal and ethics challenges is an important part of the COMPACT
work plan. As already indicated in the Section 5 of the Description of the
Action, special attention has been paid to these issues since the very
beginning of the project. A legal partner (KUL) forms part of the consortium,
providing guidance and relevant expertise. Internal ethics controls
in COMPACT include setting up an internal ethics committee and defining
checklists for project compliance, as per Task 1.4. The consortium has also
appointed an ethics and privacy manager.
Moreover, a dedicated work package (WP8) deals specifically with ethics
requirements, such as notifications to competent data protection authorities
(POPD - Requirement No. 4), authorisation for use of non-public data (POPD -
Requirement No. 5) and details on preventing the misuse of research findings
(M - Requirement No. 6). All these requirements have been duly met by relevant
partners.
SELP, or security, ethics, legal and privacy, is one of the building blocks of
setting up COMPACT products. Specific tasks have been allocated to deal with
SELP aspects of COMPACT (T2.5, T3.4), especially research ethics, privacy
rights and data protection regime under the GDPR. Nevertheless, a DMP is in
principle not part of general GDPR compliance due to the latter’s slightly
different scope of application. Namely, the GDPR applies to the processing of
personal data. Personal data are defined in Art. 4(1) of the GDPR as any
information relating to an identified or identifiable natural person (‘data
subject’). Open research data, on the other hand, can be any kind of data
resulting from research, whether personal, pseudonymised (which is still
personal data) or anonymised formerly personal data, but also data from
(chemical) lab trials, industrial data or any other kind of data with no
connection to an individual person, which falls wholly outside the scope of
the GDPR.
However, should personal data be used as part of the DMP, they will be
anonymised, or if that is not possible, they will be pseudonymised according
to the current state of the art, and the additional information necessary for
re-identifying the individual will be kept separately (according to Art. 4(5)
of the GDPR). Pseudonymisation is a permissible measure for data protection in
research, according to Art. 89; nevertheless, wherever feasible, further
identification of individuals should be prevented. Moreover, for personal data processed
in the context of COMPACT trials, relevant controller-processor agreements for
data sharing will be concluded between end users and technology partners.
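As an illustration of pseudonymisation with the re-identification information
kept separately (Art. 4(5) GDPR), consider the minimal sketch below; the field
names are invented, and a production implementation would keep the lookup
table in a separately secured store:

```python
import secrets

# Re-identification table: kept apart from the research data, under access control.
lookup: dict[str, str] = {}

def pseudonymize(record: dict) -> dict:
    """Replace the direct identifier with a random pseudonym."""
    pseudonym = secrets.token_hex(8)
    lookup[pseudonym] = record["name"]   # stored only in the separate table
    cleaned = {k: v for k, v in record.items() if k != "name"}
    return {"subject_id": pseudonym, **cleaned}

print(pseudonymize({"name": "Jane Doe", "tenure_years": 7, "answer_q1": 4}))
```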
**Is informed consent for data sharing and long term preservation included in
questionnaires dealing with personal data?**
**Informed consent forms & information sheets – updated to reflect the GDPR
requirements**
The internal ethics committee has provided COMPACT partners with informed
consent forms and information sheets, updated to reflect the new GDPR
requirements. The information requirements are laid down in Art. 13 and 14.
Accordingly, the information sheets give research participants information
about, inter alia:
* Purposes of data collection, data processing and data analysis
* Types of personal data processed
* Transfer of their personal data between their employer/LPA and the relevant technical partner(s) involved in the trials
* The rights they have as data subjects, and information on how to exercise them
* The period for which the data will be stored
The participants will receive the information sheet together with the informed
consent forms before they start the trials. They have the right to withdraw
from the research at any time without any adverse consequences.
# 7. Other
**Do you make use of other national/funder/sectorial/departmental procedures
for data management? If yes, which ones?**
Not defined yet.
# 1 Data Summary
## 1.1 Purpose of the data collection/generation and relation to OSOS project
The main goal of the OSOS project is to **develop a framework that could
facilitate the transformation of schools to Open Schooling Hubs.** An Open
Schooling Hub will be an open, curious, welcoming, democratic environment
which will support the development of innovative and creative projects and
educational activities. It will provide a powerful framework for school
leaders to engage, discuss and explore how schools will facilitate open, more
effective and efficient co-design, co-creation, and use of educational
content. Towards this objective, the consortium will develop the **Open
Schooling Roadmap** to support schools to reflect on, plan and undertake
changes in education for 21st Century learning. The role of the Open Schooling
hubs will be varied with tasks expanding from proposing school projects,
adapting them in order to fit the Open Schooling approach of the project, to
providing guidance and reflective feedback. A devoted social platform will be
developed (as part of the Inspiring Science Education infrastructure) to
support the process, to facilitate the sharing of ideas and project and to map
the schools’ development.
A **series of exemplary cases across Europe (and beyond) will be identified
and piloted in a core group of schools that will act as Open Schooling Hubs.**
A pool of cases will thus be created, based on whole-school projects and
initiatives from science centres and museums or research centres that promote
creative problem solving, discovery, learning by doing, experiential learning,
critical thinking and creativity, simulating the real scientific work. These
activities include use of data analysis tools, access to unique resources, to
animations and simulations of physical processes and experiments, educational
games and projects presenting the basic concepts of science, and more. Based
on the Open Schooling framework (WP2) these educational cases will be enriched
and expanded taking account of (and utilizing) every student’s extended
learning relationships (peer-peer, student-teacher, involving parents or
external mentors or businesses), so that learning is something that can happen
at any time, in any place, and with a wider range of coaches, science
communicators, mentors, and experts.
The initial core group of Open Schooling Hubs will be expanded to a large
community of schools that are establishing links with the local communities,
research centers and industry while they are developing common projects to
solve big problems and meet big challenges of our society. Each one of the 100
Open Schooling Hubs will develop a network of at least 9 additional schools to
form a national network of schools where the Open School Culture is
introduced. Overall more than 1,000 schools will be involved in the project in
two implementation phases.
It is obvious that during the lifetime of OSOS, data of different nature will
be generated and collected. These data are user and machine generated, which
means that they may contain sensitive personal information, and thus a clear
plan is required on how they are to be managed, i.e., stored, accessed,
protected against unauthorized or improper use, etc. The development of the
virtual learning community section will be enhanced by the Open Schooling Hub
Community Support Environment that will provide tools for community building
and support. A large set of data will stem from the systematic validation of
the OSOS approach and activities in order to identify their impact in terms of
the effectiveness and efficiency. The proposed validation methodology offers a
framework for validating the introduction of innovation in schools so that
piloting and field testing results can be collated and analyzed systematically
and then disseminated widely, thus ensuring rapid impact and widespread
uptake. The key areas of interest of the proposed validation methodology will
be: Science Pedagogy; Organisation issues (e.g. impact on the curriculum);
Technology (tools, services and infrastructure); Economic (value for money,
added value); and Cultural and linguistic issues.
The purpose of this Data Management Plan, regarding the OSOS project is to:
* specify the data that will be collected during the activities of OSOS,
* investigate the best practices and guidelines for sharing the project outcomes and facilitating open access to research data, while ensuring compliance with the established ethical and privacy rules, and
* define how the data collected in the project will be made available to third parties.
The DMP needs to be updated over the course of the project whenever
significant changes arise, such as (but not limited to): (i) new data, (ii)
changes in consortium policies (e.g. new innovation potential, decision to
file for a patent), (iii) changes in consortium composition and (iv) external
factors (e.g. new consortium members joining or old members leaving). The DMP
will be updated as a minimum in time with the periodic evaluation/assessment
of the project. If there are no other periodic reviews foreseen within the
grant agreement, then such an update needs to be made in time for the final
review at the latest. Furthermore, the consortium can define a timetable for
review in the DMP itself. 1
## 1.2 Types & Formats of collected/generated data
In this section we describe the OSOS infrastructure for data collection and
generation and the types of data to be collected during implementation (WP5)
and impact assessment studies (WP6).
### 1.2.1 The Connection of OSOS data with ODS and ISE generated data
The OSOS project aims to validate its approach with the very large school
communities who are currently using the services offered by the Open Discovery
Space (ODS) socially empowered portal (http://portal.opendiscoveryspace.eu/)
(main outcome of the major European initiative funded by European Commission's
CIP‐ICT Policy Support Programme) (Athanasiades et al, 2014).
The ODS portal is currently used by **5,000 European Schools** from **20
European Member States.** The use of ODS services (combined with the
functionalities of the Inspiring Science Education (ISE) tools) has resulted
in substantial growth in digital maturity (e-maturity) of the participating
schools, even for schools which were considered e‐mature when they joined
the network. The participating school communities became core nodes of
innovation, involving numerous teachers in sharing educational content and
experiences (Sotiriou et al, 2016).
The **Inspiring Science Education** services allow and guide the teachers to
participate in a dynamically expanded collaborative network of school/
thematic/ national/ international communities. Thus, the participating school
communities became core nodes of innovation, involving numerous teachers in
sharing educational content and experiences (Sotiriou et al, 2016). Schools
that were involved in ODS and ISE large scale initiatives have developed
innovations locally, while the consortium sought to understand what works
across the innovation programme as a whole.
Based on the principles of creative community involvement and design-based
research, the ODS approach was designed as **a three-step process** , aiming
to stimulate, incubate and accelerate the uptake of innovative eLearning
practices in school communities and national policies. **Figure 1** presents
the ODS innovation approach in its final format.
**As a first step** (following the agreement of the school management) a local
team of teachers analyses the school’s needs and identifies areas in which
the school can best demonstrate innovative approaches and projects. At this
level, initial scenarios have been implemented to pioneer future-oriented
practices and to experiment with eLearning resources and technology-enhanced
learning applications. The resulting detailed action plan (for at least one
year) includes targets and milestones toward their achievement. At this phase,
ODS offered a rich database of creative initiatives with access to numerous
resources, guidelines and support (also online through webinars and hangouts)
as well as examples for the coordination of action plans offering funding
opportunities for the realization of the school action plans (e.g., in the
framework of ERASMUS+ program).
**The second step** aimed to encourage the uptake of resource-based
learning practices and to engage a wider school community (by involving more
teachers in the projects and initiatives) in implementing resource-based
educational scenarios in various curriculum areas, as well as to reflect on
the use of tools, resources and practices through a series of practice
reflection workshops. This phase was supposed to create the steady and
supportive development of new learning techniques and methodologies, leading
to sustained improvement. The development of strong communities of practice
around the implementation scenarios was regarded as a crucial element in the
success of the proposed interventions. Its focus was not only on the
integration of digital resources into syllabi, but also on the subsequent
modernization of the school organization and teachers’ professional
development. Localized assessment approaches estimated the impact on both
individuals and schools as organizations, as well as on the development of
effective
cooperation with organizations like universities and research centers,
informal learning centers (e.g., museums), enterprises, industries and the
local communities.
The objective of **the third step** was to accelerate the educational changes
regarded as effective and to expand them to significant parts of the school,
always keeping in mind the school’s main needs (as defined in phase one).
Attention was given to exploiting knowledge management techniques (sharing
what is known within ODS school communities); synthesizing evaluation; and
accelerating diffusion within national agencies (to reach more users).
Insights from ODS’s work on online communities, as well as from synthesizing
school needs, also aimed at supporting the acceleration of diffusion within
national agencies. In the framework of OSOS the project team, using the
extended experience from the large scale pilots over the last years, will
design and implement localized approaches and strategies in different
countries and in the different school settings.
### 1.2.2 Types and format of collected / generated data by direct input
methods
OSOS best practices will act as accelerators of the introduction of the OSOS
approach in the participating schools. They will help innovative schools to
progress further and develop their innovative ideas into new localised projects that
could provide new solutions for the school and its community, for bridging the
gap between formal and informal learning settings and creating new
opportunities for personalisation at different levels (student, teacher,
school).
**Overall, data will be collected and/or generated from the following
sources:**
1. **Questionnaires and other direct input methods** , capturing data from headmasters, teachers, students and external stakeholders (see subsection 1.2.3 for details, and D6.1 and D6.2, under preparation, to be delivered in M8)
2. **Existing and new data which will be developed during the whole school transformation process** and stored in the OSOS platform. These include content types already existing in the OSOS Infrastructure (inherited from the ODS/ISE portals) but also new content types, such as projects and accelerators (see subsection 1.2.4 for details)
3. **Existing and new data generated by the use of the OSOS platform and collected through shallow and deep analytic tools** (see subsection 1.2.5 for details)
4. **Data collected and produced for dissemination and exploitation of project results** , following the guidelines set at the project’s Grant Agreement 2 : Dissemination of Results — Open Access — Visibility of EU Funding (see subsection 1.2.6 for details)
Based on the OSOS approach, **Figure 2** depicts the datasets that will be
generated (D6.1), collected and used during the full cycle of the school
transformation with the support of the OSOS support mechanism (WP3).
Figure 2: The school transformation cycle
More specifically, the OSOS Infrastructure will support a series of tools (
**Open Schooling Incubators** ) for the involved practitioners (to develop
their projects, to share their best practices with others, to disseminate
their work) and a series of best practices ( **Open Schooling Accelerators** )
that the participating schools can adopt to their local communities needs in
order to demonstrate their potential to act as core nodes in them. OSOS will
use the **Inspiring Science Education** services to offer numerous tools for
the school communities that will be involved in the project.
### 1.2.3 Types and format of data collected / generated during the OSOS
Evaluation Framework
The OSOS team is developing a comprehensive OSOS Evaluation Framework
consisting of a set of measurable quantitative and qualitative indicators and
impact assessment tools to gauge the effectiveness and impact of the OSOS
approach. The Evaluation Framework is also drawing from current interlinking
validation methodologies for RRI in education e.g. EnRRIch, UNESCO global
citizenship goals, ENGAGE2020 and the EC guidelines. OSOS Evaluation Framework
will draw on the following evaluation and validation methodologies:
**Science Pedagogy** (this will consist of quantitative and qualitative
assessment, on continuum scale of ‘mass’, ‘density’, ‘temperature’ and
‘reflectivity’ in our definition of RRI-enhanced science pedagogy, which is
pedagogy involving the following principles: Sparking Interest and Excitement;
Understanding Scientific Content and Knowledge; Engaging in Scientific
Reasoning; Reflecting on Science; Using the Tools and Language of Science;
Identifying with the Scientific Enterprise)
**Organisational Cultures and Culture Change** (quantitative and qualitative
assessment of the teaching institutions, hubs and curricula under the
following criteria: artifacts (visible), espoused beliefs and values (may
appear through surveys) and basic underlying assumptions (Schein, 1985))
**Technology – tools, services and infrastructure** (quantitative and
qualitative assessment of the teaching technology pedagogies and
infrastructures, while not enforcing ‘tech-push’, but to utilise Mark Prensky
‘s (2005) cultures of tech innovation, with indicators of 1) ‘Dabbling’ 2) Old
things/old ways 3) Old things/new ways 4) new things /new ways. This is to
include direct classroom tools, formal and informal processes, community-
building and social media approaches)
**Economic /other value, added value** (using the SMEV model of
‘Socially-Modified Economic Valuation’ (Munck et al 2014), a Harvard-devised
model which
uses a ‘social weighting’ measuring the social contribution of collaborative
community educational activities.
This value accompanies or ‘shadows’ the actual economic value. Thus,
activities in socially disadvantaged areas would be ‘worth’ more in terms of
social value generated. Measurements can be done on the basis of, as examples:
* number of partnerships between schools, local communities and local industry;
* number of stakeholders involved and interactions;
* structured or flexible interactions: equity of social capital/ social power of stakeholders in the process;
* tools and skills acquired by the stakeholders as a result of open schooling activities;
* tools and skills attachment to pedagogical/ RRI goals.
**Cultural and linguistic issues** (quantitative and qualitative assessment of
teaching and community interaction under 1) didaktik, 2) Vygotsky’s social
learning (Vygotsky, 1978), and 3) communities of practice (Lave and Wenger,
1991), with particular emphasis on gender and language).
Different methods and techniques are being employed, including a mix of
quantitative and qualitative methods such as document and statistical
analysis, interviews, focus groups, tracking of student interest/progression,
online survey tools, etc. To collect quantitative data, an evaluation template
with standardized questions and reflection points is being developed. Each
OSOS National Coordinator and pilot hub contact point will populate the
evaluation template and submit quarterly reports. Data with headings to
capture specific information, such as the number of industry role models
engaged, the number of students engaged with industry, the number of
partnerships created, etc., will then be analysed by the evaluation team.
During evaluation, the main issues to consider include:
* How many partnerships between schools, local communities and local industry have been created as a result of a pilot open schooling hub?
* How many stakeholders were involved and how many interactions took place?
* Were these interactions structured or flexible?
* Were the interactions dominated by any particular stakeholder or was the process flexible allowing for mutual learning two-way knowledge transfer?
* What tools and skills were acquired by the stakeholders as a result of open schooling activities?
* Did these tools and skills contribute to more scientifically interested and literate students and, more generally, society? If yes, how?
Other evaluation techniques and methods to be employed are including:
* tracking the number of institutions that adopted the Open school hub model at staged intervals over the project cycle;
* conducting surveys with random sample of citizens pre and post engagement in the OSOS open school hub and comprehensive assessment of potential changes in attitudes, behavior, knowledge attainment.
The project is also evaluating the potential of the OSOS model to integrate
RRI more effectively in OSOS pilot schools and more generally in schools across
Europe. Specifically, it will assess to what extent teachers, students and
other stakeholders engaged through OSOS open schooling approach have a
holistic view of science, scientific research and major scientific
developments. The RRI component of evaluation will include student/teacher
pre-post engagement reflections; integration of RRI principles into school
curricula and teaching practices etc.
These reflections and evaluation of curricula and practices will reveal
changes in awareness/knowledge aspects/behaviour in relation to the RRI
principles - such as gender, ethics, open access, open science, public
engagement, governance, socio-economic development and sustainability, social
issues related to scientific developments. In addition, impact of the OSOS
model on industry partners and non-formal education providers will also be
assessed, in particular whether industry partners incorporated any learnings
into their business processes, corporate social responsibility (CSR) and
public engagement (PE) strategies as a result of the OSOS engagement model.
### 1.2.4 Types and format of collected / generated data by users of the OSOS
Platform
To support the realization of the transformation process, OSOS will deploy an
open learning content infrastructure that aggregates existing repositories and
tools into a critical mass of e-learning contents, covering around 1,000,000
e-learning resources from 75 educational repositories across the globe.
Moreover, OSOS adopts social networking tools and offers the opportunity for
the development of lightweight school-based portals to support the development
of school-generated content, the assessment of the school’s openness level and
its cultural change.
The OSOS infrastructure will be based on the existing services offered by the
Open Discovery Space
(ODS) socially empowered portal (http://portal.opendiscoveryspace.eu/) (main
outcome of the major European initiative funded by European Commission's
CIP‐ICT Policy Support Programme). Apart from community building and support
tools numerous content creation and content delivery tools will be available
for teachers and students to facilitate the creation of their projects.
#### 1.2.4.1 School Competence Assessment Tools
As a crucial tool for assessing the openness level of a school, a
self-evaluation instrument will be offered to the participating schools’ head
masters. It will assess the school’s level of openness, with an emphasis on
the introduction of the RRI culture in six key areas: (1) leadership and
vision, (2) curriculum and use of external resources, (3) open school culture,
(4) professional development, (5) parental engagement and (6) resources and
infrastructure. Based on the school’s reference data, actionable analytics
will be provided, allowing head teachers and key stakeholders to monitor the
school’s development and the impact of the proposed innovation process.
#### 1.2.4.2 School Development Plan Templates
Pilot schools will be asked to prepare a holistic school development plan
using a provided template. These plans will provide a robust base for
automating and facilitating the task of periodic school self-assessment based
on reliable criteria, such as the development of innovative projects and
initiatives, school external collaborations, teachers’ professional
development plans and school portfolios that may also include information on
teacher-generated content and effective parental engagement strategies. The
proposed School Development Plan Template is presented in Appendix I. It will
be used in the framework of the first pilot phase and will be tested in about
100 schools in different European countries.
#### 1.2.4.3 Community Building Tools
The OSOS project will capitalize on the ODS school communities, which
currently involve 5.000 schools from all over Europe. The graph presents the
thematic communities that have been developed by these schools. One can see
that the communities are dominated by science and interdisciplinary projects,
which can form a unique space for the implementation of the Open Schooling
Activities. Several relations among the various communities and other content
in the portal are created. The communities are the places where user generated
content is created. The communities created by Teachers are automatically
related to the School where these Teachers work. The access level of a
Community also defines the access level of its content: the “public”
communities are accessible to all visitors of the portal, and the content
follows the restrictions that its creator enforces; the “private” communities
allow access to the content only to their members, with the restrictions of
the content creators applied as a further level. Each community may contain
several modules that serve the organization and promotion of its members’
activities. These modules (Groups, Events, Discussions, Activities, Blogs and
Polls) follow a specific structure in the portal and are created by the
members of the communities.
#### 1.2.4.4 Advanced Search Mechanisms
The OSOS platform will act as a harvester of educational resources (using the
advanced ODS facilities and search mechanisms), aggregating targeted content
from a variety of science-related sources through the appropriate search and
filtering mechanisms. Users can also search for schools involved in the
project, as well as for thematic communities organized by teachers to share
materials and experiences.
#### 1.2.4.5 Educational Design and Authoring Tools
In order to help teachers become developers of educational activities and
scenarios, a series of simple and more advanced authoring tools will be
available. The authoring tools promote the development of projects and adopt
the inquiry learning cycle as a core pedagogical model, while always allowing
the teacher the flexibility to modify the sequence of the educational process.
In order to facilitate the creation of high-quality teacher-generated content
and scenarios, model templates capturing rather popular science education
approaches (learning cycle, 5E model), as well as cross-curricular scenarios
and lesson plans, were developed as a source of inspiration for teachers
(Sotiriou et al. 2011). Each OSOS community member will be allowed to
customize the sources, and even the platform components, that they use to
create, search and curate content. An advanced authoring tool will be
developed to facilitate the creation of the students’ projects. The aim is to
help them become creators of educational activities which reflect the real
educational needs of their classrooms while providing solutions to their local
communities. This includes the following four-step process (see **Figure 3)** :
* Feel: Students identify problems in their local communities. They can also select topics related to global challenges. Students observe problems and try to engage with those who are affected, discuss their thoughts in groups, and make a plan of action based on scientific evidence.
* Imagine: Students envision and develop creative solutions that can be replicated easily, reach the maximum number of people, generate long-lasting change, and make a quick impact. They come into contact with external actors, look for data to support their ideas and propose a series of solutions.
* Create: Students implement the project (taking into account the RRI-related issues) and interact with external stakeholders to communicate their findings.
* Share: Students share their stories with other schools in the community and local media.
**Figure 3: The OSOS platform will offer students the opportunity to develop
their projects following a simple four-step process**
#### 1.2.4.6 Training Academies
With the aim of supporting the effective engagement of teachers, headmasters
and school communities (including parents), OSOS training academies provide
the starting point for equipping them with the competences they need to act
successfully as change agents in their settings. OSOS training academies will
provide extended online materials, webinars and hangouts on a regular basis
(both nationally and internationally), while delivering guidelines for the
generation of creative training events and activities, such as face-to-face
workshops and week-long courses at national or international level.
Collaborative professional development is expected to have a positive impact
on teachers’ repertoire of teaching and learning strategies, their ability to
match these to their students’ needs, their self-esteem and confidence and
their commitment to continuous learning and development. During the
implementation process, personal and individualized support is a necessary
prerequisite (in addition to the provision of the suite of supporting tools)
for empowering teachers to engage in innovative practices.
Instead of suggesting a one-size-fits-all approach to the forms and types of
pilot activities, the schools will be free to choose or design the types of
school-based activities and the curriculum areas that they will target. This
is regarded as a form of school innovation, responding to each school
individually with an aim to foster commitment. An implementation thus includes
multiple types of activities, ranging from school-based interventions (that
design, develop and implement small-scale projects with local stakeholders) to
collaborative activities across countries designed by the schools in
collaboration with national coordinators.
The Academy will thus develop and test a complete training program for
headmasters and teachers to enable the introduction of responsible innovation
in the European schools. In addition, it will build a permanent support
mechanism for the introduction, the adoption and the acceleration of
responsible innovation in schools through the creation of an OSOS sub-network
of schools and teachers willing to participate in innovative training
activities. In line with the OSOS approach the training academy will focus on
a methodological & pedagogical framework outlining the key stages of the
development of innovation support in schools and will include areas such as:
* School needs analysis tools,
* School leaders and teachers supporting structures,
* Tailor-made and CPD-relevant Teacher/Learning online communities,
* A guide on how to turn innovative ideas into real classroom activities,
* A guide for teachers on becoming authors of educational content.
The professional development training program for teachers and school leaders
will help facilitate the implementation of the necessary changes and the
development of the necessary diagnostic and intervention skills to best plan
and then diffuse innovation in their own contexts. An effective professional
development approach will provide the starting point for equipping teachers
with the competences they need to act successfully as change agents,
developing a terminology necessary to describe the dynamics of innovative
change processes, and making them able to recognize different forms of
resistance and address them in their own context.
### 1.2.5 Use of data provided by the OSOS portal web analytic tools
Existing and newly generated data will be used for the assessment of the
“technological” dimension of the project, including the community building
inside and outside the schools, the introduction of innovation in school
settings, the teachers’ and students’ work, the use of the portal services and
the generation of new content. The data available for this type of assessment
will be available to the responsible project partners and partially to the
headmasters of the schools as managers of the schools’ workspaces and the
relevant communities in the portal.
The data that will be used for the assessment and presentation of relevant
progress in various aspects will be generated:
1. with the use of the **portal analytics tool** that underlies the portal infrastructure and directly accesses the data repositories of the portal, and
2. **google analytics** , which is mostly used to monitor the traffic of visitors throughout the portal pages.
These two tools and the use of the data they generate are presented below.
It is noted that all information tracked, logged and presented **respects the
anonymity** of the registered users of the ISE portal. Regarding the privacy
policy for the collection and use of the data provided by the users in the
portal, a relevant Privacy Statement is available online and presented in
Appendix 7.1. During the project **no raw / unstructured data will be
collected or generated.**
#### 1.2.5.1 The use of data through portal analytics tool
The information that is tracked and collected by the _Analytics Tool_ (
_http://portal.opendiscoveryspace.eu/analytics_ ) is used to monitor the
following elements in the portal, which are also the basis of all the analysis
behind the design and implementation of the Tool:
* Content generation
* Schools’, teachers’, students’ and stakeholders’ engagement and participation
* Schools’, teachers’, students’ and stakeholders’ collaboration and networking
* Community building
* Evolution of portal
* Re-use and access of portal content
Considering these dimensions, the base information that needs to be collected
from the different repositories containing the portal data, in order to be
used by the visualization and reporting tool, is defined per entity/object of
the portal and analysed taking into account: (i) the role of the users that
take the relevant triggering actions, (ii) the time period of the analysis,
and (iii) the type of content that is related to these actions/events.
In order to support all these dimensions and specifications, all of the
actions taken in the portal by visitors and users are tracked. The full lists
of the actions that are monitored and the events that are tracked are
presented in **Table 1**.
##### Table 1: Summary of tracked events by the Portal Analytics Tool
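As an illustration of how such tracked events can be combined with the three analysis dimensions described above (the acting user's role, the time period, and the related content type), the following Python sketch models a single event record and a dimension filter. The field and action names are assumptions for illustration, not the portal's actual tracking schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical shape of a single tracked event, covering the three analysis
# dimensions: the acting user's role, the time of the action, and the type
# of content the action relates to. All field names are assumptions.
@dataclass
class TrackedEvent:
    action: str          # e.g. "create", "view", "join" (illustrative values)
    user_role: str       # e.g. "teacher", "student", "stakeholder"
    content_type: str    # e.g. "resource", "community", "project"
    timestamp: datetime

def filter_events(events, role=None, content_type=None, start=None, end=None):
    """Select events matching the role / content-type / time-period dimensions."""
    for e in events:
        if role and e.user_role != role:
            continue
        if content_type and e.content_type != content_type:
            continue
        if start and e.timestamp < start:
            continue
        if end and e.timestamp > end:
            continue
        yield e
```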
During the OSOS project, this tool and the tracking and reporting mechanisms
will be properly extended to cover and support the analytics also for the new
content types and the new services / features of the portal. The portal
analytics tool provides access to the statistics and their reports. **The
analysis and the presentation of the data do not include any personal
information of the users and they remain anonymous** .
**Figures 4-8** present some examples of the supported analysis of the data.
These examples are based on data produced by the ISE project, but similar
analysis will be provided for the OSOS project too.
**Community building and networking:** refers to analytics related to the
growth of the communities, regarding their number, their relations (network of
communities) and the number of their members.
**Figure 4: ISE communities per domain: the number of communities created by
the registered users of the portal per subject domain**
**User generated content:** these statistics present the growth of the user
generated content uploaded in the communities that users participate in as
members; this content might be resources, projects, events, groups,
activities, blogs, discussions and polls.
**Figure 5: User generated content contributed by the members of the
communities in the portal for a specific time period**
**Schools’ engagement and participation:** for the different time periods of
the project, this type of statistics presents the number of schools engaged in
the project activities using the portal services.
**Figure 6: Number of schools per country that participate in the project
activities, using the portal services**
**Social data in the portal:** since the portal provides a number of social
services to its users for networking and for sharing content in and out of the
portal using social networks’ features, these statistics concern the extent of
use of the social services and the growth of relevant social content created
in the portal.
**Figure 7: Social data generated in the portal by the use of social services,
representing the preferences of the users on these services while networking
and sharing content**
**Type of content used in the authoring tool:** this is related to the type of
content that is used / embedded in the authored content by the users and shows
how, and with which resources, the users choose to enrich their content.
**Figure 8: Multimedia or simple textual content used by the Teachers when
creating their own educational resources**
The queries that are executed to produce such analysis are discriminated into
specific categories in order to be more easily manageable. In particular, the
following query categories are defined:
* Portal related queries,
* Community related queries,
* Group related queries,
* Training Academy related queries,
* Specific Page related queries.
These queries can be viewed under a specific Community, Group or Training Academy. Each Analyst can create his/her own queries and has no access to the queries created by other Analysts. Only the administrator of the portal has access to the entire set of query information and results.
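The per-category, per-Analyst query model just described can be sketched as follows; the class and field names are hypothetical, not the portal's actual data model.

```python
from dataclasses import dataclass

# The five query categories named above (identifiers are assumptions).
QUERY_CATEGORIES = {
    "portal", "community", "group", "training_academy", "specific_page",
}

@dataclass
class AnalyticsQuery:
    name: str
    category: str   # must be one of QUERY_CATEGORIES
    owner: str      # the Analyst who created the query

    def __post_init__(self):
        if self.category not in QUERY_CATEGORIES:
            raise ValueError(f"unknown query category: {self.category}")

def visible_queries(queries, user, is_admin=False):
    """An Analyst sees only their own queries; the administrator sees all."""
    return [q for q in queries if is_admin or q.owner == user]
```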
#### 1.2.5.2 Google analytics service in the portal
To better support the needs of the portal to monitor the access to its pages
by visitors (whether registered or not), the google analytics service (
https://analytics.google.com ) is also activated and supported by the
application infrastructure.
The main objectives and use of this service are:
* To monitor and present the access to the portal pages
* To assess the returning visitors and the newcomers
* To monitor the flows that the users follow in the portal while navigating
* To measure the time that users spend in the portal and on individual pages
* To determine the basic geographical allocation of the visitors
* To record the basic technological aspects of the terminals that the visitors use to access the portal (mobile, desktop, types of browsers, etc.)
Some examples that are mainly applicable for the use of google analytics in
the portal are presented in **Figures 9 to 11**.
**Figure 9: Basic statistics about the visits and the visitors’ profile of the
portal using google analytics**
**Figure 10: Example report of google analytics**
**Figure 11: Monitoring the visitors’ navigation flow in the portal with
google analytics**
#### 1.2.5.3 Use of portal data for supporting the assessment tools in the
project
The OSOS project focuses, among other critical objectives, on the assessment
methodology and measurement of the project's impact on the Schools and the
involved teachers, students and stakeholders while implementing the planned
“open schooling” activities. To fully support this, the portal needs to
provide certain information in a specific structure, first of all to support
interoperability and the feed of information to the project assessment tools.
Towards this direction, the portal will support properly designed web services
to make the necessary information available in a secure and fully protected
manner. These web services will not be available to the public, and no
personal information of the registered users will be transferred.
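A minimal sketch of what such a web-service payload could look like, assuming events are simple dicts with `role` and `content_type` keys (an assumed shape, for illustration): only aggregate counts are returned, so no personal data leaves the portal.

```python
from collections import Counter

def assessment_feed(events):
    """Aggregate tracked events into the counts handed to the assessment
    tools; only totals are exposed, so no personal information of
    registered users is transferred."""
    events = list(events)
    return {
        "total_actions": len(events),
        "actions_per_role": dict(Counter(e["role"] for e in events)),
        "actions_per_content_type": dict(Counter(e["content_type"] for e in events)),
    }

print(assessment_feed([{"role": "teacher", "content_type": "project"},
                       {"role": "student", "content_type": "project"}]))
```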
### 1.2.6 Data deriving from dissemination and exploitation activities
OSOS is a leading project in the field of Open Schools. Its infrastructure,
approach, methodologies, tools and deliverables will act as a reference for
future calls and projects in this area. Taking into account the guidelines set
by the H2020 Online Manual 3 , the consortium will employ a variety of
dissemination, awareness-raising and exploitation strategies that aim to
ensure the dissemination of the project's activities and outcomes at national
and European level and beyond. Furthermore, it will provide the mechanisms for
effective community building and active participation in order to encourage a
better sharing of experiences among practitioners across Europe. To maximize
dissemination and impact outcomes, key stakeholders from all necessary areas
of expertise are included in the consortium. These institutions are highly
reputable within their respective peer groups and thus have a significant
networking and consensus building capacity. To be more specific, OSOS
addresses the recommendations set out in the H2020 Online Manual as follows:
#### 1.2.6.1 Link project to the policy context
OSOS aims to support a large number of European schools in implementing Open
Schooling approaches by a) developing a model that promotes such a culture, b)
offering guidelines and advice on issues such as staff development,
redesigning time, and partnerships with relevant organisations (local
industries, research organisations, parents’ associations and policy makers),
and c) suggesting a range of possible implementation processes, from small-
scale prototypes through to setting up an “open school within a school” or
even designing a new school, while testing and assessing them in more than
1,000 school environments in 12 European countries.
The themes of the project activities developed and pursued in participating
schools will focus on areas of science linked with the Grand Societal
Challenges as shaped by the EC, will be related to RRI and will link with
regional and local issues of interest. The RRI principles are not currently
integrated into the national educational policies. Members of the consortium
are actively involved in large-scale initiatives that are trying to support
the implementation of the EU policy and to create a critical mass of
stakeholders who will effectively facilitate the introduction of RRI
principles into the national curricula and the school practices across Europe.
Collaboration between formal, non-formal and informal education providers,
enterprises and civil society is being enhanced through OSOS in order to
ensure relevant and meaningful engagement of all societal actors with science
and to increase the uptake of science studies and science-based careers,
employability and competitiveness.
#### 1.2.6.2 Involvement of potential end-users and stakeholders
The main target groups of the OSOS dissemination plan are the head-teachers
and STEM teachers and their students in the participating countries as well as
the members of outreach groups of science centers, research, commercial and
industrial organizations at local, national and European level. Specific
dissemination measures are proposed for another important target group, namely
the curriculum developers and the educational policy makers through the
publication of an evaluation report, guidelines and outcomes document.
#### 1.2.6.3 Application of project results
The project partners, both individually and in collaboration, have for many
years been developing, testing and promoting innovative educational
applications and approaches for European schools (supported by relevant
appliances and resources), which promote the sharing and application of
frontier research findings in schools, supporting the development of
21st-century competences through creative problem solving, discovery, learning
by doing, experiential learning, critical thinking and creativity, including
projects and activities that simulate real scientific work. The aim of the
project is to analytically map the process for the effective usage scenarios
of the afore-mentioned applications in school environments as part of
curriculum-led learning (integrating/embedding them in the everyday school
practice) and/or extra-curricular activities (e.g. visits to museums, science
centers, research centers, field trips), coupled with home- and
community-centered (informal) learning experiences. Each open schooling hub
will bring together representatives from industry and civil society
associations who - in cooperation with the school community - will scan the
horizons, analyse the school and community needs and cooperate to design
common projects and propose innovative solutions.
#### 1.2.6.4 Barriers to application of project results
Currently there are numerous reform efforts to spark innovation in the
education systems of the member states. Most of them focus on the introduction
of innovative teaching approaches and methodologies (inquiry learning, problem
solving, project-based work) in school settings. Nevertheless, the majority of
teachers and schools remain committed to the traditional, safe and
well-established approaches.
Member states have centralized methods at their disposal to improve education
in general, and to make STEM careers more attractive to students in
particular. But no approach can be successfully sustained without bright,
well-prepared, and well-supported teachers seamlessly interweaving the school
environment and practice into an open and dynamic ecosystem that involves
communities and stakeholders currently acting outside the schools. The lessons
from numerous studies (e.g. PISA, 2014) are simple: recruit the best to be
teachers, train them extensively and well, give them the freedom to develop
teaching skills, independence from centralised authority, and ample time to
prepare lessons and to interact with peers and educators outside the classroom
(Burris, 2012). The OSOS support mechanism supports the full cycle of the
envisaged school transformation and the shift in conventional teachers’
mindsets.
#### 1.2.6.5 Thinking ahead: once the project is completed, what further steps are needed to apply it in actual practice?
The OSOS support mechanism offers open, interoperable and personalised
solutions meeting local needs, supports school leaders in capturing innovation
and in deciding on the appropriate strategy to diffuse innovation in the
school, and through constant reflection guides them towards the transformation
of the school into an Open Schooling Hub and finally into a sustainable
innovation ecosystem. The partners of the OSOS consortium have extensive
expertise and experience working with networks of professional practitioners
and consequently have the capacity to deploy the project approach effectively
for community building, in order to develop, involve and sustain large
communities of schools and teachers who will integrate the proposed activities
in their settings. OSOS aims to create a multidirectional and multi-level
information flow, which will allow the partnership and recipients to learn
from each other by assimilating and acting on the information acquired. The
extended and effective dissemination approach will allow for the development
of a wider collaboration and engagement in the project approach and outcomes.
The OSOS Open Schooling roadmap aims to constitute a common set of guidelines,
recommendations and key messages on and from the project’s overall achieved
outcomes and followed methods. This will provide a useful reference for
helping educators, outreach groups and other key stakeholders, including
curriculum developers and educational authorities, in designing, integrating
and implementing open schooling and innovation practices in education
programmes across Europe and beyond. It will be offered in all partnership
languages, in hard copy and electronic format.
#### 1.2.6.6 Implementation of open access and support to the Commission’s Open Research Data Pilot
The OSOS project is part of the Open Research Data Pilot. Therefore, it has
developed a data management plan in the first 6 months of the project (this
deliverable) and will keep it up to date throughout the project. Furthermore,
the project consortium will: (i) deposit its research data in a suitable
research data repository, (ii) make sure third parties can freely access,
mine, exploit, reproduce and disseminate its data, and (iii) make clear what
tools will be needed to use the raw data to validate its research results, or
provide the tools itself.
## 1.3 Origin of the data
During the implementation of the OSOS project, data will be generated only by
the users (headmasters, teachers, experts, stakeholders, students) that
participate in the project activities and by the project partners that will
use the services of the portal to support the organization and implementation
of these activities. It is noted that the students are not considered regular
users of the portal, since no personal information about them is imported or
kept in the portal repositories. The students will use a nickname and a
password to access the projects in which they participate. **Figure 12**
presents the relations among the various portal entities and content types.
**Table 2** presents the relation of the main entities of the data and their
contribution to the generation of the different content types of the portal.
<table>
<tr>
<th>
</th>
<th>
_**Educational resources** _
</th>
<th>
_**Schools** _
</th>
<th>
_**Public communities** _
</th>
<th>
_**Private communities** _
</th>
<th>
_**Modules of public** _
_**community** _
</th>
<th>
_**Modules of private** _
_**community** _
</th>
<th>
_**Projects** _
</th>
<th>
_**Accelerators** _
</th>
<th>
_**Analytics** _
</th>
<th>
_**News** _
</th>
<th>
_**eLearning Tools** _
</th>
<th>
_**Training Activities** _
</th> </tr>
<tr>
<td>
_**Administrators of the portal** _
</td>
<td>
_Full_
</td>
<td>
_Full_
</td>
<td>
_Full_
</td>
<td>
_Full_
</td>
<td>
_Full_
</td>
<td>
_Full_
</td>
<td>
_Full_
</td>
<td>
_Full_
</td>
<td>
_Full_
</td>
<td>
_Full_
</td>
<td>
_Full_
</td>
<td>
_Full_
</td> </tr>
<tr>
<td>
_**Teachers** _
</td>
<td>
_C,_
_M,_
_V,_
_Cp,_
_SP_
</td>
<td>
_C,_
_M,_
_V, J_
</td>
<td>
_C,_
_M,_
_V, J_
</td>
<td>
_C,_
_M,_
_V, J_
</td>
<td>
_C, M, V, J_
_(only for_
_members)_
</td>
<td>
_C, M, V, J_
_(only for_
_members)_
</td>
<td>
_C, M,_
_Cp, V_
</td>
<td>
_V_
</td>
<td>
</td>
<td>
_V_
</td>
<td>
_V, SP_
</td>
<td>
_V, SP_
</td> </tr>
<tr>
<td>
_**Stakeholders** _
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_C,_
_M,_
_V, J_
</td>
<td>
_C,_
_M,_
_V, J_
</td>
<td>
_C, M, V, J_
_(only for_
_members)_
</td>
<td>
_C, M, V, J_
_(only for_
_members)_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td> </tr>
<tr>
<td>
_**News editors** _
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td> </tr>
<tr>
<td>
_**Analysts** _
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
</td>
<td>
_V_
</td>
<td>
_C,_
_M,_
_Cp,_
_V_
</td>
<td>
_C,_
_M,_
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td> </tr>
<tr>
<td>
_**National** _
_**Coordinators** _
</td>
<td>
_C,_
_M,_
_V,_
_Cp_
</td>
<td>
_C,_
_M,_
_V_
</td>
<td>
_C,_
_M,_
_V, J_
</td>
<td>
_C,_
_M,_
_V, J_
</td>
<td>
_C, M, V, J_
_(only for_
_members)_
</td>
<td>
_C, M, V, J_
_(only for_
_members)_
</td>
<td>
_C, M,_
_Cp, V_
</td>
<td>
_V_
</td>
<td>
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td> </tr>
<tr>
<td>
_**School managers** _
</td>
<td>
_C,_
_M,_
_V,_
_Cp,_
_SP_
</td>
<td>
_C,_
_M,_
_V, J_
</td>
<td>
_C,_
_M,_
_V, J_
</td>
<td>
_C,_
_M,_
_V, J_
</td>
<td>
_C, M, V, J_
_(only for_
_members)_
</td>
<td>
_C, M, V, J_
_(only for_
_members)_
</td>
<td>
_C, M,_
_Cp, V_
</td>
<td>
_V_
</td>
<td>
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td> </tr>
<tr>
<td>
_**Students** _
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
</td>
<td>
_EDIT /_
_PUBLISH_
</td>
<td>
_V_
</td>
<td>
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td> </tr>
<tr>
<td>
_**eLearning tools providers** _
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
</td>
<td>
_V_
</td>
<td>
_C,_
_M,_
_V_
</td>
<td>
_V_
</td> </tr>
<tr>
<td>
_**News editors** _
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
</td>
<td>
_C,_
_M,_
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td> </tr>
<tr>
<td>
_**Anonymous / unregistered / not logged-in** _
_**visitors** _
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td> </tr>
<tr>
<td>
_**Community Managers** _
</td>
<td>
_C,_
_M,_
_V,_
_C_
</td>
<td>
_V_
</td>
<td>
_C,_
_M,_
_MM,_
_V, J_
</td>
<td>
_C,_
_M,_
_MM,_
_V, J_
</td>
<td>
_C, M, MM, V, J_
</td>
<td>
_C, M, MM, V, J_
</td>
<td>
_C, M, V, C_
</td>
<td>
_V_
</td>
<td>
_C,_
_M,_
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td> </tr>
<tr>
<td>
_**Project partners** _
</td>
<td>
_C,_
_M,_
_V, C_
</td>
<td>
_C,_
_M,_
_V,_
_MM_
</td>
<td>
_C,_
_M, V_
</td>
<td>
_C,_
_M, V_
</td>
<td>
</td>
<td>
_C, M, V, J_
_(only for_
_members)_
</td>
<td>
_C, M, V, C_
</td>
<td>
_C,_
_M,_
_V, C_
</td>
<td>
</td>
<td>
_V_
</td>
<td>
_V_
</td>
<td>
_V_
</td> </tr> </table>
**Table 2: User Roles & Privileges**
Abbreviations used to define the privileges:
**Create (C):** generate new content,
**Manage (M)** : edit / update, delete. The creators of the content have by
default management privileges on the content that they have created,
**View (V)** : access to view the content of the public communities and the
content of the private communities of which the user is a member,
**Copy (Cp)** : create new content, if the IPRs set by the originator allow
it, by creating a clone of the original content,
**Join (J)** : become a member of the entity,
**Manage Membership (MM)** : manage the members of a community or community
module,
**Search based on Profile (SP)** : this option is automatically available to
all registered Teachers for personalized searching of content in the portal,
based on the preferences in their profile.
It is noted that a user can have more than one of the roles mentioned in
**Table 2** , based on their participation in the portal and the project; in
these cases the superset of the privileges is taken into account to define the
user's access and management options on the content of the portal.
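The superset rule for multi-role users amounts to a set union over the per-role privilege sets. The sketch below uses an illustrative subset of Table 2 (the "Educational resources" column); the role and privilege codes mirror the table, but the data structure itself is an assumption.

```python
# Privilege codes from Table 2: C, M, V, Cp, J, MM, SP.
# Illustrative subset of Table 2, "Educational resources" column only.
ROLE_PRIVILEGES = {
    "teacher":              {"C", "M", "V", "Cp", "SP"},
    "stakeholder":          {"V"},
    "national_coordinator": {"C", "M", "V", "Cp"},
    "student":              {"V"},
}

def effective_privileges(user_roles):
    """A user holding several roles gets the superset (union) of their
    privileges, as described above for Table 2."""
    privileges = set()
    for role in user_roles:
        privileges |= ROLE_PRIVILEGES.get(role, set())
    return privileges

# e.g. a stakeholder who is also a teacher:
assert effective_privileges(["stakeholder", "teacher"]) == {"C", "M", "V", "Cp", "SP"}
```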
## 1.4 Expected size of data
### 1.4.1 Expected number of OSOS Users & Projects
OSOS aims to create a network of 1000 pilot schools through the three phases
presented in subsection 1.2.1: Stimulation, Incubation, and Acceleration. The
aim is to create a pool of cases based on whole-school projects and initiatives
from science centres and museums or research centres that promote creative
problem solving, discovery, learning by doing, experiential learning, critical
thinking and creativity, simulating real scientific work. These activities
include the use of data analysis tools, access to unique resources, to
animations and simulations of physical processes and experiments, educational
games and projects presenting the basic concepts of science, and more. These
activities will be implemented for one school year in 100 schools in different
countries and locations (both urban and rural) in Europe. The Open Schooling
Hubs will then identify at least 9 schools each. In this way, a network of
1000 schools will be created, including the 100 Open Schooling Hubs. In each
of these 1000 schools, 2 STEM teachers will be involved and each teacher will
evaluate 10 students. This number is based on the average number of available
workstations per school. It is also envisaged that each of the 1000 schools
will create on average 5 projects.
These facts lead to the estimations presented in **Table 3**:
<table>
<tr>
<th>
**OSOS**
**Schools**
</th>
<th>
**STEM Teachers / School**
</th>
<th>
**Total Teachers Involved**
</th> </tr>
<tr>
<td>
1.000
</td>
<td>
2
</td>
<td>
2.000
</td> </tr>
<tr>
<td>
**Evaluated Students / Teacher**
</td>
<td>
**Total Evaluated Students**
</td> </tr>
<tr>
<td>
10
</td>
<td>
20.000
</td> </tr>
<tr>
<td>
**Projects / School**
</td>
<td>
**Total Projects**
</td> </tr>
<tr>
<td>
5
</td>
<td>
5.000
</td> </tr> </table>
**Table 3: OSOS expected number of Users & Projects **
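The totals in Table 3 follow from straightforward multiplication of the figures given above; a minimal sanity-check sketch:

```python
# Figures from the text: 1000 schools, 2 STEM teachers per school,
# 10 evaluated students per teacher, 5 projects per school.
schools = 1000
teachers_per_school = 2
students_per_teacher = 10
projects_per_school = 5

total_teachers = schools * teachers_per_school          # 2,000
total_students = total_teachers * students_per_teacher  # 20,000
total_projects = schools * projects_per_school          # 5,000

print(total_teachers, total_students, total_projects)
```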
### 1.4.2 Expected size of data
As the OSOS Infrastructure will be based on the ODS and ISE portal, a safe way
to estimate the expected size of data is to examine the relevant information
from the actual use of the ISE portal. The ISE portal currently hosts the data
created by almost 12.000 Teachers and Experts who participated in the ISE
project activities and used the social and community building services to
author and upload their own content. This content includes not only the user
generated data, but also the data tracked for statistical purposes, and
currently amounts to about 27GB. Based on previous analysis, about 3% of these
users upload their own content in the portal (~3600 Teachers).
**Table 4** and **Table 5** include a comparative presentation of the size of
the biggest tables in the database for a period of three (3) years, from 2015
to 2017. Apart from the table that contains caching data and is kept at low
levels, the rest of the tables have increased in size by ~6% each. Taking into
account that the OSOS project will also last for three (3) years, the number
of schools and teachers expected to participate in the project activities and
make use of the services of the ISE portal that supports the OSOS Incubators,
and the size of the data produced in similar authoring tools like the one
planned for the creation of new projects by teachers and students, it is
estimated, based on previous experience, that the size of the portal
repository will increase by about 7% up to the end of the project.
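The projection can be reproduced with simple arithmetic from the figures above (current size ~27GB, estimated growth of ~7% over the project's three years); a minimal sketch:

```python
current_size_gb = 27.0    # current ISE repository size, from the text
projected_growth = 0.07   # ~7% growth estimated up to the end of the project

projected_size_gb = current_size_gb * (1 + projected_growth)
print(f"Projected repository size at project end: ~{projected_size_gb:.1f} GB")
# ~28.9 GB
```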
<table>
<tr>
<th>
**Table name**
</th>
<th>
**Rows**
</th>
<th>
**Data size (bytes)**
</th> </tr>
<tr>
<td>
</td>
<td>
**2015**
</td>
<td>
**2017**
</td>
<td>
**2015**
</td>
<td>
**2017**
</td> </tr>
<tr>
<td>
**analytics_query_instance_data**
</td>
<td>
95.608,00
</td>
<td>
144.258,00
</td>
<td>
2.695.561.216,00
</td>
<td>
5.033.639.936,00
</td> </tr>
<tr>
<td>
**cache_form**
</td>
<td>
17.152,00
</td>
<td>
1.129,00
</td>
<td>
2.617.524.224,00
</td>
<td>
223.363.072,00
</td> </tr>
<tr>
<td>
**user_interactions**
</td>
<td>
6.017.588,00
</td>
<td>
5.869.587,00
</td>
<td>
595.542.016,00
</td>
<td>
595.542.016,00
</td> </tr>
<tr>
<td>
**field_revision_field_eo_description**
</td>
<td>
1.743.174,00
</td>
<td>
1.121.787,00
</td>
<td>
486.424.576,00
</td>
<td>
<186424576
</td> </tr>
<tr>
<td>
**field_data_field_eo_description**
</td>
<td>
1.303.370,00
</td>
<td>
1.618.206,00
</td>
<td>
486.440.960,00
</td>
<td>
4.186.440.960,00
</td> </tr>
<tr>
<td>
**field_revision_field_edu_tags**
</td>
<td>
2.194.164,00
</td>
<td>
2.323.848,00
</td>
<td>
199.884.800,00
</td>
<td>
199.868.416,00
</td> </tr>
<tr>
<td>
**taxonomy_index**
</td>
<td>
8.194.115,00
</td>
<td>
7.188.723,00
</td>
<td>
400.162.816,00
</td>
<td>
4.103.308.544,00
</td> </tr>
<tr>
<td>
**field_data_field_edu_tags**
</td>
<td>
2.069.767,00
</td>
<td>
2.088.154,00
</td>
<td>
198.868.992,00
</td>
<td>
198.868.992,00
</td> </tr>
<tr>
<td>
**field_revision_body**
</td>
<td>
791.012,00
</td>
<td>
702.165,00
</td>
<td>
400.474.112,00
</td>
<td>
4.122.494.208,00
</td> </tr>
<tr>
<td>
**field_data_body**
</td>
<td>
355.695,00
</td>
<td>
769.572,00
</td>
<td>
400.441.344,00
</td>
<td>
4.122.445.056,00
</td> </tr> </table>
#### Table 4: Biggest ISE tables (1 of 2)
<table>
<tr>
<th>
**Table name**
</th>
<th>
**Index size (bytes)**
</th>
<th>
**Total size (MB)**
</th>
<th>
**Difference**
</th> </tr>
<tr>
<td>
</td>
<td>
**2015**
</td>
<td>
**2017**
</td>
<td>
**2015**
</td>
<td>
**2017**
</td>
<td>
**%**
</td> </tr>
<tr>
<td>
**analytics_query_instance_data**
</td>
<td>
0,00
</td>
<td>
0,00
</td>
<td>
2.570,69
</td>
<td>
4.800,45
</td>
<td>
46,45%
</td> </tr>
<tr>
<td>
**cache_form**
</td>
<td>
2.637.824
</td>
<td>
98.304
</td>
<td>
2.498,78
</td>
<td>
213,11
</td>
<td>
-1072,53%
</td> </tr>
<tr>
<td>
**user_interactions**
</td>
<td>
1.049.149.44
</td>
<td>
1.049.149.440
</td>
<td>
1.568,50
</td>
<td>
1.568,50
</td>
<td>
0,00%
</td> </tr>
<tr>
<td>
**field_revision_field_eo_description**
</td>
<td>
468.189.184
</td>
<td>
468.172.800
</td>
<td>
910,39
</td>
<td>
910,38
</td>
<td>
0,00%
</td> </tr>
<tr>
<td>
**field_data_field_eo_description**
</td>
<td>
420.708.352
</td>
<td>
420.691.968
</td>
<td>
865,13
</td>
<td>
865,11
</td>
<td>
0,00%
</td> </tr>
<tr>
<td>
**field_revision_field_edu_tags**
</td>
<td>
687.489.024
</td>
<td>
687.489.024
</td>
<td>
846,27
</td>
<td>
846,25
</td>
<td>
0,00%
</td> </tr>
<tr>
<td>
**taxonomy_index**
</td>
<td>
411.025.408
</td>
<td>
415.219.712
</td>
<td>
773,61
</td>
<td>
780,61
</td>
<td>
0,90%
</td> </tr>
<tr>
<td>
**field_data_field_edu_tags**
</td>
<td>
611.745.792
</td>
<td>
612.794.368
</td>
<td>
773,06
</td>
<td>
774,06
</td>
<td>
0,13%
</td> </tr>
<tr>
<td>
**field_revision_body**
</td>
<td>
200.458.240
</td>
<td>
200.392.704
</td>
<td>
573,09
</td>
<td>
594,03
</td>
<td>
3,53%
</td> </tr>
<tr>
<td>
**field_data_body**
</td>
<td>
180.387.84
</td>
<td>
180.371.456
</td>
<td>
553,92
</td>
<td>
574,89
</td>
<td>
3,65%
</td> </tr> </table>
#### Table 5: Biggest ISE tables (2 of 2)
# 2 FAIR Data
## 2.1 Making data Findable, including provisions for metadata
### 2.1.1 Data discoverability – metadata provision and standard
identification mechanism
Metadata is, as its name implies, data about data. It describes the properties
of a dataset. Metadata can cover various types of information. Descriptive
metadata includes elements such as the title, abstract, author and keywords,
and is mostly used to discover and identify a dataset. Another type is
administrative metadata with elements such as the license, intellectual
property rights, when and how the dataset was created, who has access to it,
etc.
OSOS content types, along with the existing ISE portal content types that
support the OSOS services, use the Open Discovery Space LOM Application
Profile for their metadata descriptions, which is presented in 2.1.5. This
standard defines the identification of the data using a specific structure of
URIs, and the metadata scheme provided supports the discoverability and full
contextual description of the content. The content that fully adopts this
structure is the educational resources, while it is partially used by the
other content types in order to make the discoverability and searching of the
content easier.
### 2.1.2 Naming Conventions
Since the entire infrastructure of the portal services is based on Drupal (
_Drupal 7.56_ ), the supported conventions are used for the “machine names”
and identities of the data created and hosted, following the core db scheme
with the system tables and the newly created custom tables, related keys and
indexes presented in **Figure 13**.
Especially for the educational resources, an additional identification
mechanism is applied, included in their XML schema structured based on the
ODS LOM AP, identifying every resource by means of dereferenceable URIs.
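As a rough illustration of the Drupal machine-name convention and the URI-based identification mentioned above, the following Python sketch derives a machine name from a label and builds a dereferenceable resource URI. The exact URI path is an assumption, not the portal's actual scheme.

```python
import re

def machine_name(label):
    """Derive a Drupal-style machine name: lowercase, with any character
    that is not a letter, digit or underscore collapsed to an underscore.
    (A sketch of the convention, not Drupal's actual implementation.)"""
    name = label.strip().lower()
    name = re.sub(r"[^a-z0-9_]+", "_", name)
    return name.strip("_")

def resource_uri(base, identifier):
    """Hypothetical dereferenceable URI for an educational resource,
    following the ODS LOM AP practice of URI-based identification;
    the path segment is an assumption."""
    return f"{base.rstrip('/')}/edu_object/{identifier}"

print(machine_name("Open School Hub"))   # open_school_hub
print(resource_uri("http://portal.opendiscoveryspace.eu", "12345"))
```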
### 2.1.3 Search keywords
As mentioned, the content in the ISE portal follows the classification and
vocabularies of the ODS AP. The educational resources are fully aligned with
this standardized scheme, while other content types partially follow it, in
order to be easily searchable and aligned with the educational resources and
the profiles of the users. The search mechanism of the portal for the various
content types allows the use of specific facets for narrowing the search and
specifying exact criteria, but also supports the use of keywords. The ODS AP
already includes specific elements for defining keywords that characterize the
educational resources; this element, for the resources that have it filled in,
is used for matching the keywords provided by the users during searching with
the ones included in the metadata scheme of the resources. This ensures the
effective search of the content and the optimization of the results. During
searching, the keyword entered by the users is matched not only against this
specific element but also against the following: the title of the content, the
content of the main description, and the tags provided either by the editor or
other users of the portal (a small sketch of this multi-field matching is
given after Table 6).
The superset of the ODS AP elements that are used for searching content in the
portal is presented in **Table 6:**
<table>
<tr>
<th>
**Name of element**
</th>
<th>
**Type**
</th>
<th>
**Multilingual**
</th>
<th>
**Vocabulary - classification**
</th>
<th>
**ODS-AP Path**
</th> </tr>
<tr>
<td>
Title
</td>
<td>
Text
</td>
<td>
yes
</td>
<td>
</td>
<td>
/lom/general/title
</td> </tr>
<tr>
<td>
Author Fullname
</td>
<td>
Text
</td>
<td>
no
</td>
<td>
</td>
<td>
/lom/lifeCycle/contribute/entity
</td> </tr>
<tr>
<td>
Educational Object Description
</td>
<td>
Text
</td>
<td>
yes
(todo)
</td>
<td>
</td>
<td>
/lom/general/description/string
</td> </tr>
<tr>
<td>
LO Identifier
</td>
<td>
Text
</td>
<td>
no
</td>
<td>
</td>
<td>
/lom/general/identifier/entry
</td> </tr>
<tr>
<td>
ODS general identifier
</td>
<td>
Text
</td>
<td>
no
</td>
<td>
</td>
<td>
/lom/general/identifier/entry
(when catalog = 'ODS')
</td> </tr>
<tr>
<td>
ODS metadata
identifier
</td>
<td>
Text
</td>
<td>
no
</td>
<td>
</td>
<td>
/lom/metametadata/identifier/entry (when catalog = 'ODS')
</td> </tr>
<tr>
<td>
ODS file location
</td>
<td>
Text
</td>
<td>
no
</td>
<td>
</td>
<td>
\-
</td> </tr>
<tr>
<td>
General Language
</td>
<td>
Term reference
</td>
<td>
no
</td>
<td>
ODS AP Languages
</td>
<td>
/lom/general/language
</td> </tr>
<tr>
<td>
Language
</td>
<td>
language selection
</td>
<td>
no
</td>
<td>
(from internal vocabulary language)
</td>
<td>
/lom/general/language
</td> </tr>
<tr>
<td>
Resource Link
</td>
<td>
link
</td>
<td>
</td>
<td>
</td>
<td>
/lom/technical/location
</td> </tr>
<tr>
<td>
Educational
TypicalAgeRange
</td>
<td>
Text
</td>
<td>
no
</td>
<td>
</td>
<td>
/lom/educational/typicalAgeRange/string
</td> </tr>
<tr>
<td>
Rights Copyright
</td>
<td>
Term reference
</td>
<td>
no
</td>
<td>
ODS AP Rights.Copyright
</td>
<td>
/lom/rights/copyrightAndOtherRestrictions/value
</td> </tr>
<tr>
<td>
Rights Cost
</td>
<td>
Term reference
</td>
<td>
no
</td>
<td>
ODS AP Rights.Cost
</td>
<td>
/lom/rights/cost/value
</td> </tr>
<tr>
<td>
Classification
TaxonPath
</td>
<td>
Text
</td>
<td>
yes
</td>
<td>
</td>
<td>
/lom/classification/taxonpath/taxon/entry/string
</td> </tr>
<tr>
<td>
Classification Discipline
</td>
<td>
Term reference
</td>
<td>
</td>
<td>
ODS AP
Classification.Discipli ne
</td>
<td>
calculated from
/lom/classification/taxonpath/taxon/entry/string
</td> </tr> </table>
<table>
<tr>
<th>
Data Provider
</th>
<th>
Term reference
</th>
<th>
</th>
<th>
Repository
</th>
<th>
\-
</th> </tr>
<tr>
<td>
Educational Context
</td>
<td>
Term reference
</td>
<td>
</td>
<td>
ODS AP Educational.Context
</td>
<td>
/lom/educational/context/value
</td> </tr>
<tr>
<td>
Edu Tags
</td>
<td>
Term reference
</td>
<td>
</td>
<td>
Edu Tags
</td>
<td>
/lom/general/keyword/string
</td> </tr>
<tr>
<td>
EO Update Date
</td>
<td>
Date
</td>
<td>
</td>
<td>
</td>
<td>
/lom/lifeCycle/contribute/date/dateTime
</td> </tr>
<tr>
<td>
Aggregation Level
</td>
<td>
Term reference
</td>
<td>
no
</td>
<td>
ODS AP Aggregation Level
</td>
<td>
/lom/general/aggregationLevel/value
</td> </tr>
<tr>
<td>
Media Type
</td>
<td>
Term reference
</td>
<td>
</td>
<td>
ODS AP Technical.Format
</td>
<td>
/lom/technical/format
</td> </tr>
<tr>
<td>
Learning Resource Type
</td>
<td>
Term Reference
</td>
<td>
</td>
<td>
ODS AP
Educational.Learning ResourceType
</td>
<td>
/lom/educational/learningResourceType/value
</td> </tr> </table>
**Table 6: Facets used to apply searching in the portal content using ODS AP
elements**
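A minimal sketch of the multi-field keyword matching described in 2.1.3, assuming resources are represented as plain dicts keyed by the ODS AP elements listed above (the dict keys are an assumed simplification):

```python
def matches_keyword(resource, keyword):
    """Match a user-supplied keyword against the fields named in 2.1.3:
    the keyword element itself, the title, the main description and the
    tags. `resource` is a dict keyed by ODS AP element names (assumed)."""
    keyword = keyword.lower()
    searched_fields = (
        resource.get("keywords", []),       # /lom/general/keyword/string
        [resource.get("title", "")],        # /lom/general/title
        [resource.get("description", "")],  # /lom/general/description/string
        resource.get("tags", []),           # editor- or user-provided tags
    )
    return any(keyword in value.lower()
               for field in searched_fields for value in field)

resource = {"title": "Solar System Lab", "keywords": ["astronomy"], "tags": ["science"]}
print(matches_keyword(resource, "astronomy"))  # True
```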
### 2.1.4 Clear versioning
Dynamic versioning will be provided for the Projects created by Teachers and
edited by Students, covering both the metadata description and the actual
content. Versioning of information makes a revision of a dataset uniquely
identifiable, and this feature can be used to determine whether and how data
has changed over time and to define specifically which version the creators /
editors are working with.
Effective data versioning makes it possible to understand whether a newer
version of a dataset is available and what the changes between the different
versions are, enabling comparisons and preventing confusion. In providing this
possibility for OSOS Projects, the possibility of rolling back to a previous
version will also be available, to ensure that the creators and editors can
effectively and easily create their content. For this purpose, a version
indicator will be used to identify the separate versions of the Projects. The
final method for providing versioning information is not yet definite, but
specific guidelines will be followed:
* A unique version number or date as part of the metadata for the dataset will be used.
* A consistent numbering scheme will be used with a meaningful approach to incrementing digits, such as [ SchemaVer] 4 .
* Since a metadata schema will accompany the Projects a URI will be used that will not change as the versions change, but it will be possible to request a specific version through of it.
* Memento [ RFC7089] 5 will be used, to express temporal versioning of a dataset and to access the version that was operational at a given datetime. The Memento protocol aligns closely with the approach for assigning URIs to versions that is used for W3C specifications 6 .
In this context, the Web Ontology Language [ OWL2-QUICK-REFERENCE] 7 and
the Provenance, Authoring and Versioning Ontology [ PAV] 8 will be
considered to provide a number of annotation properties for version
information.
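As an illustration of the SchemaVer-style numbering referenced in the guidelines above (MODEL-REVISION-ADDITION), the following sketch shows how a version indicator could be incremented. It is an assumption about how OSOS might apply SchemaVer, not a finalized design.

```python
from dataclasses import dataclass

@dataclass
class SchemaVer:
    """SchemaVer-style version (MODEL-REVISION-ADDITION): MODEL for breaking
    changes, REVISION for changes that may break some existing consumers,
    ADDITION for fully backward-compatible additions."""
    model: int = 1
    revision: int = 0
    addition: int = 0

    def bump_model(self):    return SchemaVer(self.model + 1, 0, 0)
    def bump_revision(self): return SchemaVer(self.model, self.revision + 1, 0)
    def bump_addition(self): return SchemaVer(self.model, self.revision, self.addition + 1)

    def __str__(self):
        return f"{self.model}-{self.revision}-{self.addition}"

v = SchemaVer()            # 1-0-0
print(v.bump_addition())   # 1-0-1: e.g. a new optional metadata field
```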
### 2.1.5 Metadata creation standards
For the metadata description of the educational content and the classification
for the definition of other content type elements in the portal, the ODS
Application Profile is applied. The **Open Discovery Space LOM Application
Profile** is an IEEE LOM based application profile that enables the
classification and retrieval of learning resources based on their learning
context of use, capable of supporting the demands put forward by the
educational design. The ODS AP covers not only metadata authored by the
designers of the educational content but also user generated metadata such as
social tags and user reviews, including appropriate social tag categories and
evaluative metadata elements.
The main features of the ODS AP are the following:
* The ODS AP includes **all elements** of the IEEE LOM Standard
* The ODS AP includes as **mandatory elements:** (a) the metadata element **1.2 General.Title** , (b) the metadata element **4.3 Technical.Location,** which is used to store the location (e.g. URL) of an educational resource and is a crucial element for accessing an educational resource, and (c) the metadata elements **9.1 Classification.Purpose** and **9.2 Classification.Taxon Path,** which are considered key elements for classifying educational resources, lesson plans and educational scenarios based on learning context of use. These elements are commonly used by the searching mechanisms of existing repositories, collections/federations and in the ISE portal too, for the various content types that adopt this classification.
* The ODS AP includes as recommended elements the metadata elements that are used to store the learning context information of educational resources, lesson plans and educational scenarios (except for those considered as mandatory elements), and these are: 1.1 General.Identifier, 1.3 General.Language, 1.5 General.Keyword, 1.8 General.Aggregation Level, 5.6 Educational.Context, 5.7 Educational.Typical Age Range, 5.8 Educational.Difficulty, and 5.9 Educational.Typical Learning Time. Moreover, the ODS AP includes as recommended elements additional metadata elements that are frequently used by the searching mechanisms of existing repositories, collections/federations but have not previously been considered for storing learning context information. These additional elements are: 1.4 General.Description, 2.3 LifeCycle.Contribute, 5.2 Educational.Learning Resource Type. Finally, the ODS AP includes as recommended elements those elements that are frequently used as mandatory in the examined APs used as a basis, and these elements are: 3.1 Meta-Metadata.Identifier, 3.2 Meta-Metadata.Contribute, 6.1 Rights.Cost, 6.2 Rights.Copyright and Other Restrictions and 6.3 Rights.Description
* All other IEEE LOM elements are included in the ODS AP as **optional elements.**
* For the classifications, a vocabulary is available at : _http://vocbank.opendiscoveryspace.eu/_
**Table 7, Table 8, Table 9 and Table 10** present the non-standardized
elements used in the ODS AP, while **Appendix 7.2** presents an example of a
valid ODS AP compatible educational resource.
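To make the mandatory-element profile concrete, the following Python sketch builds a minimal LOM-like XML record carrying General.Title, Technical.Location and a Classification.Purpose / Taxon Path pair. It omits namespaces and the full structure of a real ODS AP instance (see Appendix 7.2 for a valid example), and all values shown are placeholders.

```python
import xml.etree.ElementTree as ET

def minimal_lom_record(title, location, purpose, taxon_entry):
    """Build a minimal IEEE LOM-like record carrying the ODS AP mandatory
    elements: General.Title, Technical.Location and the
    Classification.Purpose / Taxon Path pair. A sketch only."""
    lom = ET.Element("lom")
    general = ET.SubElement(lom, "general")
    ET.SubElement(general, "title").text = title
    technical = ET.SubElement(lom, "technical")
    ET.SubElement(technical, "location").text = location
    classification = ET.SubElement(lom, "classification")
    ET.SubElement(classification, "purpose").text = purpose
    taxon_path = ET.SubElement(classification, "taxonPath")
    taxon = ET.SubElement(taxon_path, "taxon")
    ET.SubElement(taxon, "entry").text = taxon_entry
    return ET.tostring(lom, encoding="unicode")

print(minimal_lom_record("Solar System Lab",
                         "http://example.org/resources/123",  # placeholder URL
                         "discipline", "Science"))
```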
<table>
<tr>
<th>
**General.Language – 1.3**
</th>
<th>
**Educational.Context – 5.6**
</th> </tr>
<tr>
<td>
**Value Space**
</td>
<td>
**Value Space**
</td> </tr>
<tr>
<td>
en
</td>
<td>
primary education
</td> </tr>
<tr>
<td>
nl
</td>
<td>
secondary education
</td> </tr>
<tr>
<td>
fi
</td>
<td>
informal context
</td> </tr>
<tr>
<td>
fr
</td>
<td>
</td> </tr>
<tr>
<td>
de
</td>
<td>
**Classification.Purpose – 9.1**
</td> </tr>
<tr>
<td>
it
</td>
<td>
**Value Space**
</td> </tr>
<tr>
<td>
el
</td>
<td>
assessment
</td> </tr>
<tr>
<td>
pt
</td>
<td>
discipline
</td> </tr>
<tr>
<td>
lv
</td>
<td>
</td>
<td>
educational objective
</td> </tr>
<tr>
<td>
et
</td>
<td>
</td>
<td>
learning environment
</td> </tr>
<tr>
<td>
lt
</td>
<td>
</td>
<td>
special need
</td> </tr>
<tr>
<td>
es
</td>
<td>
</td>
<td>
teaching approach
</td> </tr>
<tr>
<td>
hr
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
sr
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
bg
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
da
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
**General. Aggregation Level – 1.8**
</td> </tr>
<tr>
<td>
**Value Space**
</td>
<td>
</td>
<td>
**Definition**
</td> </tr>
<tr>
<td>
1
</td>
<td>
</td>
<td>
The smallest level of aggregation, namely, an educational resource
</td> </tr>
<tr>
<td>
2
</td>
<td>
</td>
<td>
A collection of level 1 learning objects, namely, a lesson plan
</td> </tr>
<tr>
<td>
3
</td>
<td>
</td>
<td>
A collection of level 2 learning objects, namely an educational scenario
</td> </tr>
<tr>
<td>
</td>
<td>
**9.2.1 Classification. TaxonPath.Source**
</td> </tr>
<tr>
<td>
</td>
<td>
**Value Space**
</td> </tr>
<tr>
<td>
</td>
<td>
ODS Assessment Vocabulary
</td> </tr>
<tr>
<td>
</td>
<td>
ODS Arts Vocabulary
</td> </tr>
<tr>
<td>
</td>
<td>
ODS ICT Vocabulary
</td> </tr>
<tr>
<td>
</td>
<td>
ODS Language Learning Vocabulary
</td> </tr>
<tr>
<td>
</td>
<td>
ODS Mathematics Vocabulary
</td> </tr>
<tr>
<td>
</td>
<td>
ODS Science Vocabulary
</td> </tr>
<tr>
<td>
</td>
<td>
ODS Social Studies Vocabulary
</td> </tr>
<tr>
<td>
</td>
<td>
ODS Learning Outcomes Vocabulary
</td> </tr>
<tr>
<td>
</td>
<td>
ODS Learning Environment Vocabulary
</td> </tr>
<tr>
<td>
</td>
<td>
ODS Special Needs Vocabulary
</td> </tr>
<tr>
<td>
</td>
<td>
ODS Teaching Approaches Vocabulary
</td> </tr> </table>
**Table 7: ODS AP non-standardized elements (1 of 4)**
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
**Value Space**
</th> </tr>
<tr>
<td>
_**Element** _
</td>
<td>
9.1 Classification. Purpose
</td>
<td>
9.2.1 Classification. TaxonPath.Source
</td>
<td>
9.2.2.1 Classification.
TaxonPath.Taxon.Id
</td>
<td>
9.2.2.2 Classification. TaxonPath.Taxon.Entry
</td> </tr>
<tr>
<td>
_**Value** _
</td>
<td>
assessment
</td>
<td>
ODS Assessment Vocabulary
</td>
<td>
ODS-Assess-1
</td>
<td>
Diagnostic-assessment
</td> </tr>
<tr>
<td>
ODS-Assess-2
</td>
<td>
Peer-assessment
</td> </tr>
<tr>
<td>
ODS-Assess-3
</td>
<td>
Self-assessment
</td> </tr>
<tr>
<td>
ODS-Assess-4
</td>
<td>
Summative assessment
</td> </tr>
<tr>
<td>
ODS-Assess-5
</td>
<td>
Not assessed
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
**“9.1 Classification.Purpose” = “Assessment”**
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
**Value Space**
</td> </tr>
<tr>
<td>
_**Element** _
</td>
<td>
9.1 Classification. Purpose
</td>
<td>
9.2.1 Classification. TaxonPath.Source
</td>
<td>
9.2.2.1
Classification. TaxonPath.
Taxon.Id
</td>
<td>
9.2.2.2 Classification. TaxonPath.Taxon.Entry
</td> </tr>
<tr>
<td>
_**Value** _
</td>
<td>
discipline
</td>
<td>
ODS Arts Vocabulary
</td>
<td>
ODS Curriculum – Based Vocabularies
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
**“9.1 Classification.Purpose” = “Discipline: Arts”**
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
**Value Space**
</th> </tr>
<tr>
<td>
_**Element** _
</td>
<td>
9.1 Classification. Purpose
</td>
<td>
9.2.1 Classification. TaxonPath.Source
</td>
<td>
9.2.2.1
Classification. TaxonPath.
Taxon.Id
</td>
<td>
9.2.2.2 Classification. TaxonPath.Taxon.Entry
</td> </tr>
<tr>
<td>
_**Value** _
</td>
<td>
discipline
</td>
<td>
ODS ICT Vocabulary
</td>
<td>
ODS Curriculum – Based Vocabularies
</td> </tr>
<tr>
<td>
</td>
<td>
**“9.1 Classification.Purpose” = “Discipline: ICT”**
</td> </tr>
<tr>
<td>
</td>
<td>
**Value Space**
</td> </tr>
<tr>
<td>
_**Element** _
</td>
<td>
9.1 Classification. Purpose
</td>
<td>
9.2.1 Classification. TaxonPath.Source
</td>
<td>
9.2.2.1
Classification. TaxonPath.
Taxon.Id
</td>
<td>
9.2.2.2 Classification. TaxonPath.Taxon.Entry
</td> </tr>
<tr>
<td>
_**Value** _
</td>
<td>
discipline
</td>
<td>
ODS Language Learning Vocabulary
</td>
<td>
ODS Curriculum – Based Vocabularies
</td> </tr>
<tr>
<td>
</td>
<td>
**“9.1 Classification.Purpose” = “Discipline: Language Learning”**
</td> </tr>
<tr>
<td>
</td>
<td>
**Value Space**
</td> </tr>
<tr>
<td>
_**Element** _
</td>
<td>
9.1 Classification. Purpose
</td>
<td>
9.2.1 Classification. TaxonPath.Source
</td>
<td>
9.2.2.1 Classification.
TaxonPath.Taxon.Id
</td>
<td>
9.2.2.2 Classification. TaxonPath.Taxon.Entry
</td> </tr>
<tr>
<td>
_**Value** _
</td>
<td>
discipline
</td>
<td>
ODS Mathematics Vocabulary
</td>
<td>
ODS Curriculum – Based Vocabularies
</td> </tr>
<tr>
<td>
</td>
<td>
**“9.1 Classification.Purpose” = “Discipline: Mathematics”**
</td> </tr>
<tr>
<td>
</td>
<td>
**Value Space**
</td> </tr>
<tr>
<td>
_**Element** _
</td>
<td>
9.1 Classification. Purpose
</td>
<td>
9.2.1 Classification. TaxonPath.Source
</td>
<td>
9.2.2.1 Classification.
TaxonPath.Taxon.Id
</td>
<td>
9.2.2.2 Classification. TaxonPath.Taxon.Entry
</td> </tr>
<tr>
<td>
_**Value** _
</td>
<td>
discipline
</td>
<td>
ODS Science Vocabulary
</td>
<td>
ODS Curriculum – Based Vocabularies
</td> </tr>
<tr>
<td>
</td>
<td>
**“9.1 Classification.Purpose” = “Discipline: Science”**
</td> </tr> </table>
**Table 8: ODS AP non-standardized elements (2 of 4)**
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
**Value Space**
</th> </tr>
<tr>
<td>
_**Element** _
</td>
<td>
9.1
Classification. Purpose
<td>
9.2.1 Classification. TaxonPath.Source
</td>
<td>
9.2.2.1 Classification. TaxonPath. Taxon.Id
</td>
<td>
9.2.2.2 Classification. TaxonPath.Taxon.Entry
</td> </tr>
<tr>
<td>
_**Value** _
</td>
<td>
discipline
</td>
<td>
</td>
<td>
ODS Social Studies Vocabulary
</td>
<td>
ODS Curriculum – Based Vocabularies
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
**“9.1 Classification.Purpose” = “Discipline: Social Studies”**
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
**Value Space**
</td> </tr>
<tr>
<td>
_**Element** _
</td>
<td>
9.1
Classification.
Purpose
</td>
<td>
9.2.1 Classification.
TaxonPath.Source
</td>
<td>
9.2.2
Classification.TaxonPath.Taxon
</td>
<td>
9.2.2.1 Classification.
TaxonPath.Taxon.Id
</td>
<td>
9.2.2.2 Classification. TaxonPath.Taxon.Entry
</td> </tr>
<tr>
<td>
_**Value** _
</td>
<td>
educational objective
</td>
<td>
ODS Learning
Outcomes
Vocabulary
</td>
<td>
\-
</td>
<td>
**ODS-EO-01**
</td>
<td>
**Cognitive**
</td> </tr>
<tr>
<td>
ODS-EO-01-01
</td>
<td>
Knowledge
</td> </tr>
<tr>
<td>
ODS-EO-01-01-01
</td>
<td>
Factual
</td> </tr>
<tr>
<td>
ODS-EO-01-01-02
</td>
<td>
Conceptual
</td> </tr>
<tr>
<td>
ODS-EO-01-01-03
</td>
<td>
Procedural
</td> </tr>
<tr>
<td>
ODS-EO-01-01-04
</td>
<td>
Meta – cognitive
</td> </tr>
<tr>
<td>
**ODS-EO-01-02**
</td>
<td>
**Process**
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
ODS-EO-01-02-01
</td>
<td>
To remember
</td> </tr>
<tr>
<td>
ODS-EO-01-02-02
</td>
<td>
To understand
</td> </tr>
<tr>
<td>
ODS-EO-01-02-03
</td>
<td>
To apply
</td> </tr>
<tr>
<td>
ODS-EO-01-02-04
</td>
<td>
To think critically and creatively
</td> </tr>
<tr>
<td>
**ODS-EO-02**
</td>
<td>
**Affective**
</td> </tr>
<tr>
<td>
ODS-EO-02-01
</td>
<td>
To pay attention
</td> </tr>
<tr>
<td>
ODS-EO-02-02
</td>
<td>
To respond and participate
</td> </tr>
<tr>
<td>
ODS-EO-02-03
</td>
<td>
To organize values
</td> </tr>
<tr>
<td>
ODS-EO-02-04
</td>
<td>
To form and follow a system of values
</td> </tr>
<tr>
<td>
**ODS-EO-03**
</td>
<td>
**Psychomotor**
</td> </tr>
<tr>
<td>
ODS-EO-03-01
</td>
<td>
To imitate and try
</td> </tr>
<tr>
<td>
ODS-EO-03-02
</td>
<td>
To perform confidentially following instructions
</td> </tr>
<tr>
<td>
ODS-EO-03-03
</td>
<td>
To perform independently, skillfully and precisely
</td> </tr>
<tr>
<td>
ODS-EO-03-04
</td>
<td>
To adapt and perform creatively
</td> </tr>
<tr>
<td>
**element “9.1 Classification.Purpsose” = “Educational Objective”**
</td>
<td>
</td> </tr> </table>
**Table 9: ODS AP non-standardized elements (3 of 4)**
<table>
<tr>
<th> </th>
<th colspan="4"> **Value Space** </th> </tr>
<tr>
<td> _**Element**_ </td>
<td> 9.1 Classification.Purpose </td>
<td> 9.2.1 Classification.TaxonPath.Source </td>
<td> 9.2.2.1 Classification.TaxonPath.Taxon.Id </td>
<td> 9.2.2.2 Classification.TaxonPath.Taxon.Entry </td> </tr>
<tr>
<td> _**Value**_ </td>
<td> learning environment </td>
<td> ODS Learning Environment Vocabulary </td>
<td> ODS-LE-1 </td>
<td> Audio-based </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-LE-2 </td> <td> Computer-based </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-LE-3 </td> <td> Field-based </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-LE-4 </td> <td> Lab-based </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-LE-5 </td> <td> Lecture-based </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-LE-6 </td> <td> Simulator </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-LE-7 </td> <td> Video </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-LE-8 </td> <td> Work-based </td> </tr>
<tr>
<td> </td>
<td colspan="4"> **“9.1 Classification.Purpose” = “Learning Environment”** </td> </tr>
<tr>
<td> </td>
<th colspan="4"> **Value Space** </th> </tr>
<tr>
<td> _**Element**_ </td>
<td> 9.1 Classification.Purpose </td>
<td> 9.2.1 Classification.TaxonPath.Source </td>
<td> 9.2.2.1 Classification.TaxonPath.Taxon.Id </td>
<td> 9.2.2.2 Classification.TaxonPath.Taxon.Entry </td> </tr>
<tr>
<td> _**Value**_ </td>
<td> special need </td>
<td> ODS Special Needs Vocabulary </td>
<td> ODS-SP-1 </td>
<td> Visual </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-SP-2 </td> <td> Auditive </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-SP-3 </td> <td> Psychomotor </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-SP-4 </td> <td> Behavioural </td> </tr>
<tr>
<td> </td>
<td colspan="4"> **“9.1 Classification.Purpose” = “Special Need”** </td> </tr> </table>
<table>
<tr>
<th> </th>
<th colspan="4"> **Value Space** </th> </tr>
<tr>
<td> _**Element**_ </td>
<td> 9.1 Classification.Purpose </td>
<td> 9.2.1 Classification.TaxonPath.Source </td>
<td> 9.2.2.1 Classification.TaxonPath.Taxon.Id </td>
<td> 9.2.2.2 Classification.TaxonPath.Taxon.Entry </td> </tr>
<tr>
<td> _**Value**_ </td>
<td> teaching approach </td>
<td> ODS Teaching Approaches Vocabulary </td>
<td> ODS-TA-01 </td>
<td> Behaviourist </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-TA-01-01 </td> <td> Programmed instruction </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-TA-01-02 </td> <td> Drill and practise </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-TA-02 </td> <td> Cognitivist </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-TA-02-01 </td> <td> Direct instruction </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-TA-02-02 </td> <td> Collaborative learning </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-TA-02-03 </td> <td> Inquiry learning </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-TA-02-04 </td> <td> Problem-based </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-TA-02-05 </td> <td> Reciprocal teaching </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-TA-03 </td> <td> Constructivist </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-TA-03-01 </td> <td> Cognitive apprenticeship </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-TA-03-02 </td> <td> Socratic instruction </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-TA-03-03 </td> <td> Experiential learning </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-TA-03-04 </td> <td> Action research </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-TA-03-05 </td> <td> Communities of practice </td> </tr>
<tr> <td colspan="3"> </td> <td> ODS-TA-03-06 </td> <td> Design-based learning </td> </tr>
<tr>
<td> </td>
<td colspan="4"> **“9.1 Classification.Purpose” = “Teaching Approach”** </td> </tr> </table>
**Table 10: ODS AP non-standardized elements (4 of 4)**
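To make the element structure above concrete, the sketch below assembles the IEEE LOM `classification` branch (elements 9.1–9.2.2.2) for a resource classified as “Discipline: Arts”. This is a minimal illustration only: the element names follow IEEE LOM and the values come from the tables above, while the namespace URI and the taxon id (`ODS-D-ARTS`) are assumptions made for the example, not part of the ODS AP specification.

```python
# Minimal sketch of a LOM "classification" fragment (elements 9.1-9.2.2.2)
# for "Discipline: Arts". The element names follow IEEE LOM; the namespace
# URI and the taxon id are illustrative assumptions.
import xml.etree.ElementTree as ET

LOM_NS = "http://ltsc.ieee.org/xsd/LOM"  # assumed namespace URI


def q(tag: str) -> str:
    """Return the tag name qualified with the LOM namespace."""
    return f"{{{LOM_NS}}}{tag}"


classification = ET.Element(q("classification"))

purpose = ET.SubElement(classification, q("purpose"))        # element 9.1
ET.SubElement(purpose, q("value")).text = "discipline"

taxon_path = ET.SubElement(classification, q("taxonPath"))   # element 9.2
source = ET.SubElement(taxon_path, q("source"))              # element 9.2.1
ET.SubElement(source, q("string")).text = "ODS Arts Vocabulary"

taxon = ET.SubElement(taxon_path, q("taxon"))                # element 9.2.2
ET.SubElement(taxon, q("id")).text = "ODS-D-ARTS"            # 9.2.2.1 (hypothetical id)
entry = ET.SubElement(taxon, q("entry"))                     # element 9.2.2.2
ET.SubElement(entry, q("string")).text = "Arts"

print(ET.tostring(classification, encoding="unicode"))
```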
### Definitions related to the ODS AP
**Metadata Model:** A **metadata model** is a structured description of the characteristics and properties of a given information resource; it allows the creation of catalogs and indexes for information resources, along with searching for information on the basis of these characteristics. The metadata specification most widely used for the description of digital resources is Dublin Core (DC) (Greenberg, 2001).
**Educational Metadata:** In the case of LOs, generic metadata models for
digital resources (such as the Dublin Core model) are not sufficient, because
they do not include information about the educational characteristics of a
given LO. Consequently, specialized models that lay emphasis on the
educational metadata of digital resources have been developed. **Educational
metadata** represent the educational characteristics of a LO, such as the
target groups it involves, or the educational context it addresses. The
metadata model widely used for the description of LOs is the IEEE Learning
Object Metadata (LOM) (IEEE LOM, 2005).
**Application Profile:** An Application Profile (AP) is a metadata scheme,
which consists of metadata elements selected from one or more standard
metadata schemes and it is created for allowing a given application to meet
its functional requirements (Heery and Patel, 2000). The European Committee
for Standardization (CEN/ISSS) defines an Application Profile (AP) as: “an
assemblage of metadata elements selected from one or more metadata schemas and
combined in a compound schema. Application profiles provide the means to
express principles of modularity and extensibility. The purpose of an
Application Profile is to adapt or combine existing schemas into a package
that is tailored to the functional requirements of a particular application,
while retaining interoperability with the original base schemas” (Duval et
al., 2006).
**Social Tagging:** Social Tagging refers to the act of adding keywords, also known as tags, to any type of digital resource by users (rather than by the resources’ authors) (Bonino, 2009). The term has emerged for those applications that encourage groups of individuals to openly share their private descriptions (or tags) of digital resources with other users; a collection of tags created by individuals for their personal use is referred to as a folksonomy (Anderson, 2007).
## 2.2 Making data openly Accessible
### 2.2.1 Openly available and Closed Data
A pilot deployment of the Open Learning Content Infrastructure of the OSOS Infrastructure allows all future educational content resources to be linked to portal learning metadata and hosts a subset of the educational content and metadata repositories on a virtual e-Infrastructure. To this aim, a virtual e-infrastructure is provided and a full set of portal metadata (around 600,000 e-learning resources), all structured according to the ODS AP schema, is exposed as Linked Open Data.
The full RDF dump of the portal includes about 594K metadata records exposed as Linked Data, with about 14M triples. The format of the RDF exposed (see **Figure 14** ) can be downloaded from the following link:
_http://data.opendiscoveryspace.eu/ODS_LOM2LD/ODS_SecondDraft.html_ . The linked data exposure is available at _http://data.opendiscoveryspace.eu_ .
The SPARQL endpoint is also available at
_http://data.opendiscoveryspace.eu/sparql.tpl_ .
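Since the data are exposed through a SPARQL endpoint, they can also be queried programmatically. The sketch below sends a simple query over the standard SPARQL 1.1 HTTP protocol; only the endpoint URL comes from the text above, while the example query and the assumption that the endpoint returns `application/sparql-results+json` are illustrative.

```python
# Hedged sketch: query the portal's public SPARQL endpoint over the standard
# SPARQL 1.1 HTTP protocol. The example query and result handling are
# illustrative; only the endpoint URL is taken from the document.
import requests

ENDPOINT = "http://data.opendiscoveryspace.eu/sparql.tpl"

QUERY = """
SELECT ?s ?p ?o
WHERE { ?s ?p ?o }
LIMIT 10
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()

for binding in response.json()["results"]["bindings"]:
    print(binding["s"]["value"], binding["p"]["value"], binding["o"]["value"])
```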
Apart from this way of making the educational resources openly available, specific APIs are also provided; they are fully described in **subsection 2.2.2** .
### 2.2.2 Data availability ways
#### 2.2.2.1 Open APIs of the OSOS Infrastructure
For accessing the data of OSOS Infrastructure, a number of open APIs are
available to support further integration and collaboration of the portal with
external tools that are properly authorized to use these services and access
the data.
The first step for a “tool” or third-party application to get access to the integration services is to be registered and authenticated by the administrators of the portal.
These APIs enable all educational data produced in the OSOS Infrastructure to be shared with other projects/initiatives/entities. The APIs, along with their features and attributes, are presented in **Tables 11-19.**
### • User Management Services
<table>
<tr>
<th> **Method:** </th>
<th> **getUser** </th> </tr>
<tr>
<td> **Description** </td>
<td> Retrieves data of a specific user participating in the external tools’ community </td> </tr>
<tr>
<td> **Method Input:** </td>
<td>
<table>
<tr> <th> Parameter Name </th> <th> Parameter Type </th> <th> Description </th> </tr>
<tr> <td> Community_id </td> <td> String </td> <td> The id of the external tools’ community node. </td> </tr>
<tr> <td> User_id </td> <td> String </td> <td> The id of the user </td> </tr> </table>
</td> </tr>
<tr>
<td> **Method Output:** </td>
<td>
<table>
<tr> <th> Parameter Type </th> <th> Description </th> </tr>
<tr> <td> JSON </td> <td> The user data are provided in the following format
{"username":"","mail":"","gender":"","full_name":"","organization":""} </td> </tr> </table>
</td> </tr>
<tr>
<td> **Fault:** </td>
<td>
<table>
<tr> <th> Exception Name </th> <th> Description </th> </tr>
<tr> <td> DBOperationException </td> <td> Thrown in case the operation fails to be executed </td> </tr> </table>
</td> </tr> </table>
**Table 11: User Management Services Open API (1 of 4)**
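As an illustration of how such a method might be invoked, the sketch below calls `getUser` over HTTP and parses the JSON output described in Table 11. The base URL, the endpoint path convention and the authorization header are assumptions for illustration purposes; only the method name, the parameter names and the response fields come from the table.

```python
# Hedged sketch of calling the getUser integration service. The base URL,
# path convention and auth header are illustrative assumptions; parameter
# names and JSON response fields are those listed in Table 11.
import requests

BASE_URL = "https://portal.example.org/api"  # hypothetical base URL
TOKEN = "issued-after-registration"          # hypothetical credential from the registration step

response = requests.get(
    f"{BASE_URL}/getUser",
    params={"Community_id": "1234", "User_id": "5678"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()

user = response.json()
# Expected fields per Table 11: username, mail, gender, full_name, organization
print(user["username"], user["organization"])
```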
<table>
<tr>
<th>
**Method:**
</th>
<th>
**notify**
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
</td>
<td>
Used for notification when a user account is deleted
</td> </tr>
<tr>
<td>
**Method Input:**
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
Parameter Name
</td>
<td>
Parameter Type
</td>
<td>
Description
</td>
<td>
</td> </tr>
<tr>
<td>
User_id
</td>
<td>
String
</td>
<td>
The id of the user account deleted.
</td> </tr>
<tr>
<td>
</td>
<td>
**Method Output:** N/A
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
**Fault:**
</td>
<td>
</td> </tr>
<tr>
<td>
Exception Name
</td>
<td>
</td>
<td>
Description
</td>
<td>
</td> </tr>
<tr>
<td>
DBOperationException
</td>
<td>
</td>
<td>
Thrown in case the operation fails to be executed
</td> </tr>
</table>
**Table 12: User Management Services Open API (2 of 4)**
<table>
<tr>
<th>
**Method: getUsersOfCommunity**
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Retrieves all available users participating in the external tools’ community.
</td> </tr>
<tr>
<td>
**Method Input:**
<table>
<tr>
<th>
Parameter Name
</th>
<th>
Parameter Type
</th>
<th>
Description
</th> </tr>
<tr>
<td>
Community_id
</td>
<td>
String
</td>
<td>
The id of the community node.
</td> </tr> </table>
</td> </tr>
<tr>
<td>
**Method Output:**
</td> </tr>
<tr>
<td>
</td>
<td>
Parameter Type
</td>
<td>
Description
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
JSONArray
</td>
<td>
The retrieved data are provided in the following format
[{"uid":"","name":"","mail":"","full_name":"","gender":"","organization"
:""}]
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
**Fault:**
</td>
<td>
</td> </tr>
<tr>
<td>
Exception Name
</td>
<td>
</td>
<td>
Description
</td>
<td>
</td> </tr>
<tr>
<td>
DBOperationException
</td>
<td>
</td>
<td>
Thrown in case the operation fails to be executed
</td> </tr>
</table>
**Table 13: User Management Services Open API (3 of 4)**
<table>
<tr>
<th>
**Method: createUser**
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Creates a user under the external tools’ community. If the username provided
already exists then it is considered that the user exists and the data of the
user are updated.
</td> </tr>
<tr>
<td>
**Method Input:**
<table>
<tr>
<th>
Parameter Name
</th>
<th>
Parameter Type
</th>
<th>
Description
</th> </tr>
<tr>
<td>
Community_id
</td>
<td>
String
</td>
<td>
The id of the external tools’ community node.
</td> </tr>
<tr>
<td>
User_data
</td>
<td>
JSON
</td>
<td>
The format of the user data is as follows:
{"password":"","username":"","mail":"","gender":"","full_name":"","organization":""}
</td> </tr> </table>
</td> </tr>
<tr>
<td>
**Method Output:** N/A
</td>
<td>
</td> </tr>
<tr>
<td>
**Fault:**
</td> </tr>
<tr>
<td>
</td>
<td>
Exception Name
</td>
<td>
Description
</td>
<td>
</td> </tr>
<tr>
<td>
DBOperationException
</td>
<td>
Thrown in case the operation fails to be executed
</td> </tr> </table>
**Table 14: User Management Services Open API (4 of 4)**
### • Group Management Services
<table>
<tr>
<th>
**Method: createGroup**
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Redirects user to the OSOS group creation page under the tools’ community.
</td> </tr>
<tr>
<td>
**Method Input:**
</td> </tr>
<tr>
<td>
</td>
<td>
Parameter Name
</td>
<td>
Parameter Type
</td>
<td>
Description
</td>
<td>
</td> </tr>
<tr>
<td>
Community_id
</td>
<td>
String
</td>
<td>
The id of the community node under which the group will be created.
</td> </tr>
<tr>
<td>
**Method Output:** N/A
</td> </tr>
<tr>
<td>
**Fault:**
<table>
<tr>
<th>
Exception Name
</th>
<th>
Description
</th> </tr>
<tr>
<td>
DBOperationException
</td>
<td>
Thrown in case the operation fails to be executed
</td> </tr>
<tr>
<td>
</td>
<td>
</td> </tr> </table>
</td> </tr> </table>
**Table 15: Group Management Services Open API (1 of 4)**
<table>
<tr>
<th>
**Method: getGroup**
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Retrieves data of the specified group.
</td> </tr>
<tr>
<td>
**Method Input:**
</td> </tr>
<tr>
<td>
</td>
<td>
Parameter Name
</td>
<td>
Parameter Type
</td>
<td>
Description
</td>
<td>
</td> </tr>
<tr>
<td>
Group_id
</td>
<td>
String
</td>
<td>
The id of the group node.
</td> </tr>
<tr>
<td>
**Method Output:**
<table>
<tr>
<th>
Parameter Type
</th>
<th>
Description
</th> </tr>
<tr>
<td>
JSON
</td>
<td>
The group data are provided in the following format
{"language":"","title":"","createdDate":"","number_ofParticipants":""}
</td> </tr> </table>
</td> </tr>
<tr>
<td>
**Fault:**
</td> </tr>
<tr>
<td>
</td>
<td>
Exception Name
</td>
<td>
Description
</td>
<td>
</td> </tr>
<tr>
<td>
DBOperationException
</td>
<td>
Thrown in case the operation fails to be executed
</td> </tr>
</table>
**Table 16: Group Management Services Open API (2 of 4)**
<table>
<tr>
<th>
**Method: getUsersOfGroup**
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Retrieves all available users participating in a specific group.
</td> </tr>
<tr>
<td>
**Method Input:**
<table>
<tr>
<th>
Parameter Name
</th>
<th>
Parameter Type
</th>
<th>
Description
</th> </tr>
<tr>
<td>
Group_id
</td>
<td>
String
</td>
<td>
The id of the group node.
</td> </tr> </table>
</td> </tr>
<tr>
<td>
**Method Output:**
<table>
<tr>
<th>
Parameter Type
</th>
<th>
Description
</th> </tr>
<tr>
<td>
JSONArray
</td>
<td>
The retrieved data are provided in the following format
[{"uid":"","name":"","mail":"","full_name":"","gender":"","organization"
:""}]
</td> </tr> </table>
</td> </tr>
<tr>
<td>
**Fault:**
</td> </tr>
<tr>
<td>
</td>
<td>
Exception Name
</td>
<td>
Description
</td>
<td>
</td> </tr>
<tr>
<td>
DBOperationException
</td>
<td>
Thrown in case the operation fails to be executed
</td> </tr>
</table>
**Table 17: Group Management Services Open API (3 of 4)**
<table>
<tr>
<th>
**Method:**
</th>
<th>
**addUser**
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
</td>
<td>
Adds a user in a Group under the external tools’ community.
</td> </tr>
<tr>
<td>
**Method Input:**
<table>
<tr> <th> Parameter Name </th> <th> Parameter Type </th> <th> Description </th> </tr>
<tr> <td> Group_id </td> <td> String </td> <td> The id of the external tools’ community group. </td> </tr>
<tr> <td> User_id </td> <td> String </td> <td> The id of the user. </td> </tr> </table>
</td> </tr>
<tr>
<td>
**Method Output:** N/A
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Fault:**
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
Exception Name
</td>
<td>
Description
</td>
<td>
</td> </tr>
<tr>
<td>
DBOperationException
</td>
<td>
Thrown in case the operation fails to be executed
</td> </tr>
</table>
**Table 18: Group Management Services Open API (4 of 4)**
### • Resource Sharing Services
<table>
<tr>
<th>
**Method: postScenario**
</th> </tr>
<tr>
<td>
**Description**
</td>
<td>
Posts a scenario as a reference on the community page.
</td> </tr>
<tr>
<td>
**Method Input:**
<table>
<tr>
<th>
Parameter Name
</th>
<th>
Parameter Type
</th>
<th>
Description
</th> </tr>
<tr>
<td>
Community_id
</td>
<td>
String
</td>
<td>
The id of the external tools’ community node.
</td> </tr>
<tr>
<td>
Scenario_data
</td>
<td>
JSON
</td>
<td>
The format of the scenario data are as follow
{“scenario_id”:”…”, “title”:”…” , “description”:”…”, “url”:”…”}
</td> </tr> </table>
</td> </tr>
<tr>
<td>
**Method Output:** N/A
</td>
<td>
</td> </tr>
<tr>
<td>
**Fault:**
</td> </tr>
<tr>
<td>
</td>
<td>
Exception Name
</td>
<td>
Description
</td>
<td>
</td> </tr>
<tr>
<td>
DBOperationException
</td>
<td>
Thrown in case the operation fails to be executed
</td> </tr> </table>
**Table 19: Resource Sharing Services Open API**
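By analogy, a `postScenario` call could look like the sketch below, which builds the scenario JSON exactly as specified in Table 19. The base URL and the choice of an HTTP POST with a JSON body are assumptions; the parameter names and payload fields come from the table.

```python
# Hedged sketch of posting a scenario reference to a community page via the
# postScenario service. Transport details (base URL, POST with JSON body)
# are assumptions; Scenario_data fields are those listed in Table 19.
import requests

BASE_URL = "https://portal.example.org/api"  # hypothetical base URL

scenario_data = {
    "scenario_id": "sc-001",  # illustrative values
    "title": "Exploring Newton's laws",
    "description": "An inquiry-based physics scenario.",
    "url": "https://portal.example.org/scenarios/sc-001",
}

response = requests.post(
    f"{BASE_URL}/postScenario",
    json={"Community_id": "1234", "Scenario_data": scenario_data},
    timeout=30,
)
# A failed operation (DBOperationException on the server side) would be
# expected to surface as an HTTP error status here.
response.raise_for_status()
```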
#### 2.2.2.2 Search OSOS Infrastructure educational resources
To facilitate the basic-level alignment of external tools, a JavaScript component is also available that offers functionality to search the ISE repository using the SOLR search engine operating on top of it. The SOLR search engine provides powerful search mechanisms, including full-text search and faceted search, through REST-like HTTP/XML and JSON APIs that make it easy to use from virtually any programming language.
**Figure 15** and **Figure 16** depict the use of this special JavaScript-based component.
Figure 15 depicts the case where no search parameters have been defined yet. In the left-hand column, the user is able to specify various search parameters, such as a search string, the educational context, the repository from which the desired resources should come, or the date related to the resource. Note that these parameters are indicative and the component can be configured to include many more.
**Figure 16** depicts a situation where a number of search parameters have
already been specified:
* “newton law” search string
* “en” language (this stands for English)
* “Cosmos” repository
The result set shown on the right-hand side corresponds to metadata records that refer to English resources residing in the Cosmos repository and describing Newton’s law in physics. The user can further refine the search by adding new parameters, or delete any of the above three parameters to make the search more general.
**Figure 16: The search page after specifying a number of search criteria**
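For illustration, the faceted query of Figure 16 (search string “newton law”, language “en”, repository “Cosmos”) could be expressed directly against the SOLR API roughly as follows. The host, core name and field names (`language`, `repository`) are assumptions; `q`, `fq`, `facet.field` and `wt=json` are standard SOLR request parameters.

```python
# Hedged sketch of the faceted full-text search shown in Figure 16, issued
# directly against the SOLR HTTP API. Host, core and field names are
# illustrative assumptions; the request parameters are standard SOLR syntax.
import requests

SOLR_SELECT = "https://solr.example.org/solr/ise/select"  # hypothetical host/core

params = {
    "q": "newton law",                           # full-text search string
    "fq": ["language:en", "repository:Cosmos"],  # facet filters
    "facet": "true",
    "facet.field": ["language", "repository"],
    "rows": 10,
    "wt": "json",
}

response = requests.get(SOLR_SELECT, params=params, timeout=30)
response.raise_for_status()

for doc in response.json()["response"]["docs"]:
    print(doc.get("title"), doc.get("repository"))
```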
In **Appendix 7.3** , more technical details about the use of the Search API are provided.
### 2.2.3 Methods, software tools and documentation to access OSOS data
The methods, software and tools that can be used for the different ways of accessing the portal data are presented under the Training Academies of the portal, including documentation, descriptive and supporting material. The following activities have been made available online:
#### 2.2.3.1 Title: ODS Moodle plugin
**Description:** The main target of this activity is to make an introduction and give all the necessary guidelines for the Moodle Blocks that have been developed to enable the alignment of Moodle installations with the portal. The objective behind these modules is to facilitate the inclusion of stakeholders that use Moodle installations in ODS, since several schools use Moodle to organize their content and support their learning activities. In this context, two Moodle Blocks were implemented. The first one provides access to the ODS repository search services and uses a generic JavaScript-based component. The second Moodle Block provides the necessary functionality to enable harvesting of Moodle courses by the ODS harvester, thus offering the users of a Moodle installation that uses this block the capability to publish their educational content in the ODS repository. The first block essentially provides basic-level alignment, while the second block provides a systematic way of entry-level alignment of Moodle installations.
**URL** : _http://portal.opendiscoveryspace.eu/en/tr-activity/developing-ods-moodle-plugin-introductionfuture-developers-834760_
#### 2.2.3.2 Title: Metadata Ingestion
**Description:** In this learning module the content providers will be
presented with the basics of metadata harvesting. The most common protocols
and technologies will be explored and foundations on how a harvesting-based
population federation works will be given. Special attention will be paid to
the concept of harvesting, the role of harvesting and OAI-PMH in ODS, and to
the implementation of learning objects collections through an OAI-PMH
endpoint.
**URL:** _http://portal.opendiscoveryspace.eu/en/tr-activity/metadata-
ingestion-ods-401764_
#### 2.2.3.3 Title: Building with ODS API
**Description:** This Training Activity focuses on the training of Technology
Developers on building authoring tools compatible with the ODS system. It most
particularly takes into consideration the development framework of ODS in
order to allow people to either develop their own authoring tools or extend
the existing metadata and scenario authoring tools.
**URL:** _http://portal.opendiscoveryspace.eu/en/tr-activity/building-
authoring-tools-compatible-odssystem-building-ods-api-668547_
For the Search API of the portal, relevant documentation on the basic architecture and features is available at:
_https://github.com/evolvingweb/ajax-solr/wiki/Architectural-overview_ .
### 2.2.4 Data, metadata, code and documentation repositories
The data, metadata, code and documentation repositories all rely on the same infrastructure and are hosted as virtual machines in the GRNET ViMa 9 VPS service. The full list of the repositories is the following:
* **Application server:** hosts the Drupal installation (version 7.22) over a typical Apache HTTPd/PHP stack using the prefork multi-processing module 10 . It accepts HTTP connections only from the reverse proxy, and also hosts all the static content of the ODS portal (documents, presentations, images etc.).
* **Database server:** holds the ODS portal’s data in a MySQL database (version 5.5.37) using InnoDB as the storage engine. The current database size reaches 27 GB, spanning 982 tables.
* **Web server:** responsible for serving all HTTP user requests, using an Apache HTTPd in reverse proxy mode.
For the new features of OSOS Incubators, and especially for the Project Authoring tool, a separate virtual machine will be engaged in the same infrastructure, for several reasons:
* A significant amount of data is expected to be produced during the creation and editing of projects by teachers and students, since a versioning feature will also be supported; the mass of produced data will therefore be high.
* The use of the tools by students requires concurrent use of the portal services and the tools, so the traffic load is expected to increase significantly.
* It will ease the management of the load and allow more effective support of the service, since this division will positively affect the performance of the entire system.
* Hosting the data of the Projects in a separate repository will ease and speed up the analysis needed to produce the statistics for the assessment framework of the project.
It should be noted that the provider of the infrastructure already supports the portal services for openness of data, providing the respective services of the portal, and will continue this support for the OSOS project lifetime and its new features.
### 2.2.5 Restrictions
The project actions and results are aligned with similar international
initiatives and its various phases are developed taking into account input
from key external players (educational authorities, international outreach
groups, scientific organizations).
The consortium will prepare a detailed plan so that the results of the project are presented at numerous conferences and workshops in Europe and beyond (e.g. EPS, GIREP, ESERA, EHTA biennial conferences, etc.). Upon delivery of the first project results, papers will be submitted to scientific journals and magazines focusing on education, innovation education and STEM education. The papers produced by OSOS will be open access, so that they are immediately and freely available to the wide public (more precisely, using the gold open access model). Furthermore, according to the OSOS Grant Agreement 12, each OSOS partner must ensure open access (free-of-charge online access for any user) to all peer-reviewed scientific publications relating to its results.
In particular, it must:
* as soon as possible and at the latest on publication, deposit a machine-readable electronic copy of the published version or final peer-reviewed manuscript accepted for publication in a repository for scientific publications; Moreover, the beneficiary must aim to deposit at the same time the research data needed to validate the results presented in the deposited scientific publications.
* ensure open access to the deposited publication — via the repository — at the latest on publication, if an electronic version is available for free via the publisher, or within six months of publication
* ensure open access — via the repository — to the bibliographic metadata that identify the deposited publication.
Regarding the digital research data generated in the action (‘data’), the
beneficiaries must:
* deposit the data in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge for any user — the following: (i) the data, including associated metadata, needed to validate the results presented in scientific publications, as soon as possible; (ii) other data, including associated metadata, as specified and within the deadlines laid down
_12_ Grant Agreement number: 741572 — OSOS — H2020-SwafS-2016-17/H2020-SwafS-2016-1, Article 29
* provide information — via the repository — about the tools and instruments at the disposal of the beneficiaries that are necessary for validating the results (and — where possible — provide the tools and instruments themselves).
Regarding the sensitive student-related data generated and collected through questionnaires, projects and activities throughout the project, each school’s responsible OSOS manager is able to decide if and when these data can be shared or used. In this case, a Creative Commons License will be used (see **subsection 2.4.1** ).
## 2.3 Making data Interoperable
All methods, tools and standards described in the previous sections ensure the interoperability of the data generated in the ISE portal, as well as of the services of the portal. All standards that cover this aspect are listed below:
* **ODS LOM AP:** Metadata description of educational resources and related content
* **UNESCO ICT Competency Framework for Teachers** 11 **:** For the Teachers’ profiles, to be able to match teachers’ competences with educational resources descriptions (through IEEE LOM related metadata)
* **IEEE LOM** 12 **:** Standard’s vocabularies are used for the classification of the ODS AP elements
* **OAI-PMH:** For metadata publishing and harvesting
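As an example of the last item, the sketch below issues a standard OAI-PMH `ListRecords` request for Dublin Core records and prints the record identifiers. The endpoint URL is a placeholder assumption; the `verb` and `metadataPrefix` parameters and the response namespace are defined by the OAI-PMH specification itself.

```python
# Hedged sketch of harvesting metadata over OAI-PMH. The endpoint URL is a
# placeholder; verb=ListRecords and metadataPrefix=oai_dc are standard
# OAI-PMH protocol parameters, and the namespace is fixed by the spec.
import xml.etree.ElementTree as ET

import requests

OAI_ENDPOINT = "https://portal.example.org/oai"  # hypothetical endpoint
OAI_NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}

response = requests.get(
    OAI_ENDPOINT,
    params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
    timeout=60,
)
response.raise_for_status()

root = ET.fromstring(response.content)
for record in root.findall(".//oai:record", OAI_NS):
    identifier = record.find(".//oai:header/oai:identifier", OAI_NS)
    if identifier is not None:
        print(identifier.text)
```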
## 2.4 Increase data re-use
### 2.4.1 License schemes of OSOS Data to permit the widest use possible
The licensing scheme applied in the portal for the educational content follows the framework and standard restrictions specified by the Creative Commons licenses ( _https://creativecommons.org_ ). The originators of the content define the level of use and accessibility when creating new content, and the selected option is applied for all users and visitors of the portal.
For educational content that the originator declares fully open to access and use, the rules applied by the privacy levels of the community are enforced. The selected options are recorded and defined by specific elements in the metadata description of the educational content, thus ensuring that all future uses and presentations of the content will follow them in case the portal repository is aggregated by a third, external repository.
### 2.4.2 Re-use of Data
All four types of data collected and generated during the project’s lifetime, and after its termination, will be re-usable by the massive community of OSOS stakeholders. OSOS will deploy an open learning content infrastructure that aggregates existing repositories and tools into a critical mass of e-learning content, covering around 1,000,000 e-learning resources from 75 educational repositories across the globe. Moreover, OSOS adopts social networking tools and offers the opportunity to develop lightweight school-based portals to support the development of school-generated content, the assessment of the school’s openness level and its cultural change.
Additionally, the consortium includes a key player in the field of digital education in Australia in order to inform the proposed OSOS framework and the resulting services with findings and initiatives taking place in different places of the world where school openness and innovation are at the highest level of the educational policy agenda. More specifically, the consortium includes as a partner the Curtin University of Technology (CURTIN).
Curtin University of Technology (CURTIN) has considerable research expertise
in key areas of this proposal namely, (a) in modelling schools as
organizations as well as specific school actors such as teachers (b) digital
systems for open access to education. In this framework CURTIN will contribute
to WP2 and WP6.
Furthermore, the OSOS consortium set up a collaborative scheme with the Office
for Digital Learning of the Massachusetts Institute of Technology (MIT). The MIT ODL team are the initiators of the edX platform. MIT is focusing on the recent Initiative for Learning and Teaching (TILT), which aims to bring the essence of the MIT learning approach beyond the borders of the campus to K-12 learners and teachers around the world. TILT looks to fill a growing need in science education by initiating new research, design, and outreach programs that will transform both how students learn and how we understand how students learn.
TILT is following similar approaches with OSOS and offers a unique opportunity
to follow its developments and to cooperate systematically with the MIT team
in the OSOS Strategies (WP2) and to the Open Schooling Best Practices and
Initial Pilots (WP4).
This collaboration scheme is based on the implementation of the “Arrangement
between the European Commission and the Government of the United States of
America for cooperation between researchers funded separately by the European
Union’s and the United States Framework Programmes on Research and innovation
(signed on 17/10/2016)”. Thus, the OSOS coordinating entity, EA, signed an agreement related to the Project with MIT ODL so as to a) offer specific services to the project and b) reach a common understanding in respect of intellectual property rights, data access, data dissemination and other matters considered essential to collaboration, and to lay that understanding down in a Memorandum of Understanding among themselves. The Horizon 2020 OSOS coordinating entity (EA) ensures that any understanding reached is compatible with its obligations under the Horizon 2020 project, and the US partner (MIT) likewise ensures that it is compatible with the obligations under its funding mechanism (e.g. the National Science Foundation of the USA, http://www.nsf.gov/).
# 3 Allocation of resources
## 3.1 Costs of making OSOS data FAIR
OSOS’ infrastructure will be based on the existing ODS and ISE portals and their huge community and precious content. The costs for setting up this infrastructure are analysed in **Table 20** .
<table>
<tr>
<th>
**Cost Item**
</th>
<th>
**Total Investment**
</th> </tr>
<tr>
<td>
Development of the Open Discovery Space ( _http://opendiscoveryspace.eu/_ )
portal (50% co-funded)
</td>
<td>
10.000.000 €
</td> </tr>
<tr>
<td>
Development of Inspiring Science Education (www.inspiringscience.eu) content (50% co-funded)
</td>
<td>
8.000.000 €
</td> </tr>
<tr>
<td>
OSOS Authoring Tool for student projects
</td>
<td>
200.000 €
</td> </tr>
<tr>
<td>
**Total**
</td>
<td>
**18.200.000 €**
</td> </tr> </table>
**Table 20: OSOS needed infrastructure cost**
## 3.2 Data management responsibilities in the OSOS Infrastructure
For the effective, secure and proper creation and management of the content, the portal includes specific roles, properly assigned with access and administration privileges:
**Administrators** : they have full access to the portal content and services. Only a few people are assigned this role; they are responsible for the administration of the portal and the support of users.
**Registered users** : these are users that participate in the implementation of the project from the schools’ and stakeholders’ side, and might be teachers or experts. This role is selected by the user during his/her registration or assigned by the administrator of the portal.
**National coordinators** : The National Coordinators are mainly responsible
for the registration and management of the Schools. This role is assigned by
the administrator of the portal.
**School manager** : this role is assigned by the National Coordinators to teachers who are responsible for managing the profile of the Schools under which they are registered.
**Community Managers** : this role is automatically assigned to the registered user that creates a new community in the portal; Community Managers have access to the administration of the content of the community and also of its members. The Community Manager initializes, among other descriptive elements, the school under which the community belongs (if he/she is a Teacher) and whether the community is public or private.
**Students** : these are not regular users of the portal. They access the projects in which they participate using only a nickname and password that are related only to the project, the URL of which is shared by their teacher. No further information or personal data is imported or kept in the portal.
**eLearning tools managers** : this role is assigned to users that are
responsible for the management of the eLearning tools in the portal. This role
is assigned by the administrator of the portal.
**Training activity contributors** : the users assigned with this role are
responsible for the management of the Training Activities. This role is
assigned by the administrator of the portal.
**Tool providers** : this role is assigned to users that are responsible for registering eLearning tools in the portal. This role is assigned by the administrator of the portal.
**News editor** : the users assigned with this role are responsible for the
management of the News in the portal. This role is assigned by the
administrator of the portal.
**Analyst** : the users with this role can create queries on the analytics and view the results in the form of reports or download them as Excel files. This role is assigned by the administrator of the portal.
## 3.3 Costs and value of long term preservation
The annual administrative costs of the OSOS Portal are estimated at around 40.760 euros, while the annual maintenance costs are estimated at 13.920 euros. The Total Annual Cost for running the OSOS Portal & Support Mechanisms is thus estimated at 54.680 euros, as analysed in **Table 21.**
<table>
<tr>
<th>
**PORTAL ADMINISTRATION**
</th>
<th>
**Calculated on monthly basis**
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
# /
month
</th>
<th>
time
(min)
</th>
<th>
total time
(min)
</th>
<th>
total time (hours)
</th>
<th>
days
</th>
<th>
**Cost /**
**month**
</th>
<th>
**YEARLY**
</th> </tr>
<tr>
<td>
**Administration Support**
</td>
<td>
</td> </tr>
<tr>
<td>
User registrations
</td>
<td>
60
</td>
<td>
2
</td>
<td>
120
</td>
<td>
2
</td>
<td>
0,25
</td>
<td>
80,00 €
</td>
<td>
960,00 €
</td> </tr>
<tr>
<td>
Community requests
</td>
<td>
25
</td>
<td>
3
</td>
<td>
75
</td>
<td>
1,25
</td>
<td>
0,16
</td>
<td>
50,00 €
</td>
<td>
600,00 €
</td> </tr>
<tr>
<td>
Users' communication
</td>
<td>
10
</td>
<td>
70
</td>
<td>
700
</td>
<td>
11,67
</td>
<td>
1,46
</td>
<td>
466,67 €
</td>
<td>
5.600,00 €
</td> </tr>
<tr>
<td>
Community statistics
</td>
<td>
1
</td>
<td>
480
</td>
<td>
480
</td>
<td>
8
</td>
<td>
1
</td>
<td>
320,00 €
</td>
<td>
3.840,00 €
</td> </tr>
<tr>
<td>
Portal statistics
</td>
<td>
0,33
</td>
<td>
2400
</td>
<td>
800
</td>
<td>
13,33
</td>
<td>
1,67
</td>
<td>
533,33 €
</td>
<td>
6.400,00 €
</td> </tr>
<tr>
<td>
ISE tools statistics
</td>
<td>
0,33
</td>
<td>
300
</td>
<td>
100
</td>
<td>
1,67
</td>
<td>
0,21
</td>
<td>
66,67 €
</td>
<td>
800,00 €
</td> </tr>
<tr>
<td>
Maintenance of statistics file
</td>
<td>
1
</td>
<td>
120
</td>
<td>
120
</td>
<td>
2,00
</td>
<td>
0,25
</td>
<td>
80,00 €
</td>
<td>
960,00 €
</td> </tr>
<tr>
<td>
**Dissemination Support**
</td>
<td>
</td> </tr>
<tr>
<td>
e-mail communication
</td>
<td>
2
</td>
<td>
960
</td>
<td>
1920
</td>
<td>
32
</td>
<td>
4
</td>
<td>
1.280 €
</td>
<td>
15.360 €
</td> </tr>
<tr>
<td>
News admin
</td>
<td>
3
</td>
<td>
20
</td>
<td>
60
</td>
<td>
1
</td>
<td>
0,13
</td>
<td>
40,00 €
</td>
<td>
480,00 €
</td> </tr>
<tr>
<td>
**Development Support**
</td>
<td>
</td> </tr>
<tr>
<td>
New GUI of supported communities / engaged
projects
</td>
<td>
0,5
</td>
<td>
1440
</td>
<td>
720
</td>
<td>
12
</td>
<td>
1,5
</td>
<td>
480,00 €
</td>
<td>
5.760,00 €
</td> </tr>
<tr>
<td>
**TOTAL ADMINISTRATION COSTS (€)**
</td>
<td>
**4375**
</td>
<td>
**72,92**
</td>
<td>
**9,11**
</td>
<td>
**3.396,67**
</td>
<td>
**40.760,00**
</td> </tr>
<tr>
<td>
**PORTAL MAINTENANCE**
</td>
<td>
# /
month
</td>
<td>
time
(min)
</td>
<td>
total time
(min)
</td>
<td>
total time (hours)
</td>
<td>
days
</td>
<td>
**Cost /**
**month**
</td>
<td>
**YEARLY**
</td> </tr>
<tr>
<td>
Technical support (bug fixing, caching and clearance steps, upgrade if necessary, restart)
</td>
<td>
3
</td>
<td>
240
</td>
<td>
720
</td>
<td>
12
</td>
<td>
1,5
</td>
<td>
480,00 €
</td>
<td>
5.760,00 €
</td> </tr>
<tr>
<td>
Infrastructure maintenance
</td>
<td>
3
</td>
<td>
240
</td>
<td>
720
</td>
<td>
12
</td>
<td>
1,5
</td>
<td>
480,00 €
</td>
<td>
5.760,00 €
</td> </tr>
<tr>
<td>
**TOTAL (€)**
</td>
<td>
</td>
<td>
</td>
<td>
**1440**
</td>
<td>
**24**
</td>
<td>
**3**
</td>
<td>
**960,00 €**
</td>
<td>
**11.520,00**
</td> </tr>
<tr>
<td>
Hosting
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
**200,00 €**
</td>
<td>
**2.400,00 €**
</td> </tr>
<tr>
<td>
**TOTAL MAINTENANCE COSTS (€)**
</td>
<td>
**13.920,00**
</td> </tr>
<tr>
<td>
**TOTAL ANNUAL COSTS FOR RUNNING THE OSOS PORTAL & SUPPORT MECHANISMS (€) **
</td>
<td>
**54.680,00**
</td> </tr> </table>
### Table 21: OSOS Annual Administration & Maintenance Costs
# 4 Data security
The security & privacy for the interacting users of the platform, including all relevant stakeholders, are covered from various aspects and levels throughout the portal’s infrastructure and platform:
* **Platform:** This level targets the seamless access of users to the individual components of the portal services, supporting and enhancing the service in an advanced and secure manner.
* **Service / sub-system:** This level of security covers the distributed authorization and auditing processes of the individual sub-systems of the platform, and allows the controlled access and guidance of users through the available services.
* **Entities organization:** This level provides controlled access to, and privacy mechanisms for, the data of Schools, teachers and communities/groups, ensured by the organization of the relations among these entities. These levels of organization are fully respected in the implementation and provision of the relevant services and in the navigation of users throughout the portal.
* **Community-based:** Due to the collaborative nature of the services provided through the growth of extended community networks, privacy settings are also supported at community level, in order to protect the privacy settings set by the community managers on users’ activities and on content developed within the communities. The “open” and “private” modes are established as options and respected at all levels of presentation of, and access to, the communities’ content.
* **Users’ profile-based:** Users/entities registered and profiled in the portal partially manage access to their personal/public data based on their preferences and self-defined restrictions.
* **Content contextual description & IPRs:** Access to the individual educational resources, tools and other content provided in the portal follows the IPRs set by their authors/creators, as recorded in their contextual descriptions.
The privacy policy followed in the portal, and its special conditions, are presented in the relevant policy statement in **Appendix 7.1** ; it will be provided online for acknowledgement by the users.
# 5 Ethical aspects & Other Issues
## 5.1 Ethical Aspects
The OSOS project will comply with data protection acts, directives, and
opinions, both at European and at National level. These include: (i) Directive
95/46/EC of the European Parliament and the Council of 24 October 1995 on the
protection of individuals with regard to the processing of personal data and
on the free movement of such data, (ii) The Charter of Fundamental Rights of
the EU, specifically the article concerning the protection of personal data,
(iii) UK Data protection act 1998, reviewing the rights of UK citizens to
access personal information, (iv) The opinions of the European Group on Ethics
in Science and New Technologies in their report “Citizen Rights and New
Technologies: A European Challenge” on the Charter on Fundamental Rights
related to technological innovation. In particular, recommendations related to
ICT concerning data protection and individuals’ freedom and autonomy will be
taken into account. The Project Coordinator will ensure compliance with such
legislation. The Consortium notes that the proposed research and activities of the project involve one area highlighted in the Ethical Issues Table, i.e. the involvement of children.
Furthermore, OSOS will engage young people, aged 10-18, in science education
activities. Parental consent will be required for the children to participate
in the project. A series of procedures will therefore be set in motion to
secure privacy, security and ethical conduct in all our field and evaluation
studies. Specifically:
* All researchers and mentors involved in school-based activities, or working directly with children outside school, will be required to follow all national procedures for verifying fitness to access school premises.
* All researchers will be required to have school-verified identification and a school liaison person available at all times during school visits.
* All researchers will be aware of essential health and safety issues concerning students on school premises.
* Parental consent will be obtained for each step in the activity (e.g. children’s involvement in evaluation studies, videos, photographs).
## 5.2 Other data management procedures
**Handling of user personal data:** The project consortium will devise
mechanisms that comply with the rules relating to the protection of personal
data, as described in Directive 95/46/EC. This Directive regulates the
processing of personal data and stipulates, among other things, the following:
* The data subject has the right to be informed when his/her personal data are being processed
* Data may be processed only under the following circumstances (Art 7):
* When the data subject has unambiguously given his/her consent.
* When the processing is necessary for the performance of or the entering into a contract.
* When processing is necessary for compliance with a legal obligation.
* When processing is necessary in order to protect the vital interests of the data subject.
* When processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller or in a third party to whom the data are disclosed.
* When processing is necessary for the purposes of the legitimate interests pursued by the controller or by the third party or parties to whom the data is disclosed, except where such interests are overridden by the interests for fundamental rights and freedoms of the data subject.
In summary, in all cases the project undertakes to ensure that all data used
within it will be:
* fairly and lawfully processed
* processed for limited purposes
* adequate, relevant and not excessive
* accurate
* not kept longer than necessary
* processed in accordance with the data subject's rights, both locally and within EU legislation
* secure
* not transferred to countries without adequate protection.
Introduction of scholar at risk in webpage (M6-7). D4.3 public in webpage (1 Month after approval)
<table>
<tr>
<th> T4.3 </th>
<th> To implement four Transdisciplinary and Transnational Science Shop projects to respond “glocally” to societal grand challenges and facilitate the exchange of students internationally. </th>
<th> Define four Transdisciplinary and Transnational RRI and Open Science projects
D4.2 </th>
<th> None </th>
<th> \- </th>
<th> IrsiCaixa and ISGlobal, with inputs from all partners </th>
<th> Description of the TT RRI OS Projects in website (M32-33)
D4.2 public in webpage (1 Month after approval) </th> </tr> </table>
<table>
<tr>
<th>
T4.4
</th>
<th>
To organize an open call to spread out InSPIRES good practices and implement 5
TT SS projects
</th>
<th>
Define open call evaluation criteria Selection of winners
D4.1
</th>
<th>
None
</th>
<th>
\-
</th>
<th>
ISGlobal with inputs from all the partners
</th>
<th>
Publication of open call and evaluation indicators in website (M22-23)
Publication of winners in website (M28)
D4.1 public in website (1 Month after approval)
</th> </tr> </table>
## 3.5 WP5
The overall aim of this work package is to facilitate the training of new science shop structures and their steady connection by developing training sessions, information sessions and pedagogical material tailored to the relevant stakeholders: actual or future coordinators and mediators, local communities’ representatives, academic staff/researchers, students and local authorities. The ambition is to strengthen the growth of science shop models up to the standards identified by the European Commission, in particular in terms of RRI, OSc and impact evaluation requirements, and to contribute to tackling current grand challenges.
<table>
<tr>
<th> Task </th>
<th> Name of task </th>
<th> Outputs </th>
<th> Data generated </th>
<th> Kind of data </th>
<th> Responsible partners </th>
<th> MGMT strategy </th> </tr>
<tr>
<td colspan="7"> WP5 - Training activities to strengthen the growth of science shops </td> </tr>
<tr>
<td> T5.1 </td>
<td> To create common basic knowledge among all the project partners </td>
<td> Training workshop during Kick-off meeting </td>
<td> None </td>
<td> \- </td>
<td> UdL, VU, ESSRG and UNIFI </td>
<td> Brief report on common definitions in website (M3-4) </td> </tr> </table>
<table>
<tr>
<th>
T5.2
</th>
<th>
To organize two “on the field” international schools in Tunisia and Bolivia,
to strengthen the participatory research and innovation practices beyond
Europe’s borders
</th>
<th>
School in Tunis
School in Bolivia
Training materials
</th>
<th>
None
</th>
<th>
\-
</th>
<th>
UdL, VU, ESSRG and ISGlobal
</th>
<th>
Report on schools in website: Tunisia (M22-23) and Bolivia (M24-25).
Training materials available in website and other repositories (RRI-tools,
scientix, etc) (M26-27)
</th> </tr> </table>
<table>
<tr>
<th> T5.3 </th>
<th> To launch two European summer International schools to strengthen the participatory research and innovation practices beyond Europe’s borders </th>
<th> Summer school in Budapest
International winter school
Training materials </th>
<th> None </th>
<th> \- </th>
<th> UdL </th>
<th> Report on schools in website: Budapest (M18-19) and final school (M46-47).
Calls to schools in website and other related webs (M40). </th> </tr>
<tr>
<th> T5.4 </th>
<th> Co-development of an accessible and interactive eLearning platform to develop and strengthen a “global civil society” </th>
<th> Videos in 4 languages
First release in M28
Second release in M39
D5.2 </th>
<th> None </th>
<th> \- </th>
<th> UdL, VU, ESSRG and UNIFI, with inputs from all partners </th>
<th> Announcement on releases in website (M28 and 29)
Hosting of videos in website (M29-30).
MOOC in platforms (Coursera…) (M36)
D5.2 public in website (1 month after approval) </th> </tr> </table>
<tr>
<td>
T5.5
</td>
<td>
Joint production of videos and case studies based on the SS2.0, including the
transnational ones
</td>
<td>
Videos in 4 languages
Code for voice over
(deaf)
Language sign addition D5.1
</td>
<td>
None -
</td>
<td>
UdL, with inputs from all partners
</td>
<td>
Hosting of videos in website (M29-30).
Code for voice over available
(M31-32)
D5.1 public in website (1 month after approval)
</td> </tr> </table>
## 3.6 WP6
This WP, led by ISGlobal, proposes to review, improve and provide an impact evaluation approach to deliver the evidence and increase the effectiveness and impact of SS in society. This WP aims to improve the impact evaluation approach of previous SS and to harmonize and integrate the impact evaluation methodology in SS projects. This new impact evaluation approach will also offer clear guidance for evaluating SS in different geographical and sectorial contexts, helping to identify and co-create with end-users relevant indicators of process and result to monitor and evaluate the impacts before and after the implementation of the SS.
<table>
<tr>
<th> Task </th>
<th> Name of task </th>
<th> Outputs </th>
<th> Data generated </th>
<th> Kind of data </th>
<th> Responsible partners </th>
<th> MGMT strategy </th> </tr>
<tr>
<td colspan="7"> WP6 – Impact Evaluation </td> </tr>
<tr>
<td>
T6.1
</td>
<td>
Review the impact evaluation methodologies used in previous Science Shop
models and compare the impact that SS projects have had on communities,
teaching, training, research, and innovation in collaboration with WP2
</td>
<td>
Review of previous impact evaluation methods for SS. D6.2 (with T6.2 and
T6.3)
</td>
<td>
None
</td>
<td>
\-
</td>
<td>
ESSRG, with input from ISGlobal, VU and UdL
</td>
<td>
Summary of review in website (M6-7)
D6.2 public in website (1 month after approval)
</td> </tr> </table>
<table>
<tr>
<th>
T6.2
</th>
<th>
Develop, validate and integrate, in
collaboration with WP3,
a new impact evaluation methodology for RRI and Open Science Shops.
</th>
<th>
New impact evaluation methodology – D6.1
D6.2 (with T6.1 and
T6.3)
</th>
<th>
None
</th>
<th>
\-
</th>
<th>
ISGlobal, with input from all the partners
</th>
<th>
Brief summary of new methods in website (M12-13). D6.1 and D6.2 public in
website (1 month after approval)
</th> </tr> </table>
<table>
<tr>
<th> T6.3 </th>
<th> Compare impact evaluation methodologies and results between new and traditional models of SS’s and geographical areas in collaboration with WP4 </th>
<th> D6.2 (with T6.1 and T6.2) </th>
<th> T6.3d1 – questionnaires
T6.3d2 – Interviews transcripts
T6.3d3 – Focus groups transcripts </th>
<th> T6.3d1 – QT
T6.3d2 and d3 - QL </th>
<th> ISGlobal with input from all the partners </th>
<th> D6.2 public in website (1 month after approval)
Summary of D6.2 results in website (M27-28)
T6.3d1 – anonymised + consent signed (in questionnaire) + stored in Zenodo, public after approval of D6.2
T6.3d2 and d3 - consent signed + anonymised + stored in Zenodo, public after D6.2 approval </th> </tr> </table>
<table>
<tr>
<th>
T6.4
</th>
<th>
Assess learning and training process in WP5
</th>
<th>
Evaluation of training activities
</th>
<th>
T6.4d1 – questionnaires
T6.4d2 – Interviews transcripts T6.4d3 – Focus groups transcripts
</th>
<th>
T6.4d1 – QT
T6.4d2 and d3
\- QL
</th>
<th>
VU, with input from ISGlobal
</th>
<th>
Summary of results in website (M46-47).
T6.4d1 – anonymised + consent signed (in questionnaire) + stored in Zenodo, public
T6.4d2 and d3 - consent signed + anonymised + stored in Zenodo, public
</th> </tr> </table>
<table>
<tr>
<th> T6.5 </th>
<th> Assess the communication activities in collaboration with WP7 </th>
<th> Evaluation of communication activities </th>
<th> T6.5d1 – questionnaires
T6.5d2 – Interviews transcripts
T6.5d3 – Focus groups transcripts </th>
<th> T6.5d1 – QT
T6.5d2 and d3 - QL </th>
<th> UNIFI, with input from ISGlobal </th>
<th> Summary of results in website (M40).
T6.5d1 – anonymised + consent signed (in questionnaire) + stored in Zenodo, public
T6.5d2 and d3 - anonymised + consent signed (in questionnaire) + stored in Zenodo, public </th> </tr> </table>
## 3.7 WP7
This horizontal WP, led by ISGlobal, will cover the communication, dissemination and exploitation activities for the project, including the development of a plan for internal and external project communication; the development of communication tools; the dissemination of the project results; and, very importantly, the development of an exploitation plan in order to ensure the long-term running of the SS activities. This horizontal WP will be fundamental to raising awareness of these research practices within civil society, as well as among other RRI stakeholders.
InSPIRES will be mainly operating in territories where such practices have not yet been developed.
<table>
<tr>
<th> Task </th>
<th> Name of task </th>
<th> Outputs </th>
<th> Data generated </th>
<th> Kind of data </th>
<th> Responsible partners </th>
<th> MGMT strategy </th> </tr>
<tr>
<td colspan="7"> WP7 – Communications, dissemination and exploitation </td> </tr>
<tr>
<td>
T7.1
</td>
<td>
Internal and External Communication plan and tools
</td>
<td>
Communication Plan Website (accessible and with a section of
“Submit your question”)
D7.1
</td>
<td>
None
</td>
<td>
\-
</td>
<td>
ISGlobal, with inputs of all partners
</td>
<td>
Communication strategy summary in website (M8) D7.1 public in website (M48)
</td> </tr>
<tr>
<td>
T7.2
</td>
<td>
Dissemination
</td>
<td>
Communication actions
</td>
<td>
None
</td>
<td>
\-
</td>
<td>
ISGlobal, with inputs of all partners
</td>
<td>
\-
</td> </tr>
<tr>
<td>
T7.3
</td>
<td>
Exploitation and Sustainability
</td>
<td>
IPR strategy D7.1
</td>
<td>
None
</td>
<td>
\-
</td>
<td>
ISGlobal, with inputs of all partners
</td>
<td>
D7.1 public in website (M48)
</td> </tr>
<tr>
<td>
T7.4
</td>
<td>
Intermediate and Final Conferences
</td>
<td>
Intermediate conference – M15 Final conference – M46
</td>
<td>
None
</td>
<td>
\-
</td>
<td>
ISGlobal, with inputs of all partners
</td>
<td>
Conference calls in website
(M13 and M 44)
Conference reports in website
(M16 and M 47)
</td> </tr> </table>
# 4\. Authorship policy
The following authorship policy aims at anticipating and mitigating misunderstandings around the publication of dissemination and exploitation materials. It emerges from leading journals’ recommendations and from evidence from systematic reviews of authorship across research disciplines (Dickersin, 2002; Marušić, 2011).
If possible, InSPIRES publications should be published in Open Access journals, to make the results available to as many readers as possible. With regard to dissemination, the consortium should aspire to find “inclusive” rather than “exclusive” solutions in authorship.
This policy is divided in three main topics: recognition of InSPIRES,
collective authorship and individual authorship:
* Recognition of InSPIRES: Every publication emerging from InSPIRES or its satellite Science Shop projects must acknowledge the InSPIRES project and EC funding according to the dissemination plan rules and must quote the following sentence: "This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 74167". In some journals it is possible to use collective authorship in the form of "The InSPIRES Consortium" or "N.N. _and_ the InSPIRES Consortium", where all members of the consortium are included. Whenever possible, this formula might be explored. Note that this kind of authorship might require signatures or an authorship declaration from the entire group, and all group members should approve the final version before publication.
* Collective authorship: collective authorship might be appropriate for dissemination materials such as posters or presentations describing the overall project. In such cases, the formula "The InSPIRES Consortium" must be used. Collective authorship is appropriate when the intellectual work underpinning a publication has been carried out by a group, and no one person can be identified as having substantially greater responsibility for its contents than others. When one or several members of the project are presenting the project (at a conference, for example), the by-line would read "N.N. _on behalf of_ the InSPIRES Consortium".
* Individual authorship: according to the suggestions of the International Committee of Medical Journal Editors, those identified as authors on a scientific manuscript submitted for publication must meet the following three conditions:
  * Substantial contributions to conception and design, OR acquisition of data, OR analysis and interpretation of data.
  * Drafting the article or revising it critically for important intellectual content.
  * Final approval of the version to be published.
Participation solely in the collection of data is insufficient by itself;
those who have contributed to the article but whose contributions do not
justify authorship may be acknowledged and their contribution described.
The order of authors in the manuscript will be proposed by the person leading
the publication and evaluated by a Publication Executive Board (PEB). The lead
author will propose, in consultation with the leader of the relevant work
package (if that is a different person), an ordered list of authors to be
named on the manuscript. In general, the order of authors should be
established based on the relevance of each author's contribution, according to
the conventions of the journal's discipline. In the event of disagreement
about authorship, the PEB will adjudicate.
The first author will normally be the person who has made the largest
substantial intellectual contribution to the work. The first author will also
normally be expected to coordinate the circulation, editing, submission and
revision of the manuscript. The first author will circulate the final version
of the manuscript to the PEB at least a week before submitting it for
publication so that the board is aware of the manuscript, its contents and
authorship and has the opportunity to comment. Non-response from the members
of the PEB may be taken to indicate assent to proceed. If possible, InSPIRES
investigators who do not qualify for authorship will be listed separately
under a heading such as Contributors or Acknowledgements, as will other
members of the InSPIRES team who have made non-authorial contributions to the
manuscript.
Ensuring quality is essential to the good name of InSPIRES. Authors are
encouraged to share drafts and articles with their InSPIRES colleagues for
comment and feedback, or simply as a way of sharing their work. Published
articles and abstracts should be sent to the Coordination Team and
Dissemination WP leader.
# 1\. Introduction and background
The main focus of the Beacon Project is modelling, and mainly existing data
from ongoing or previous experiments will be (re)used. This data has been
assembled and evaluated in work package 2 and will be used in work packages 3
and 5. The database with the assembled data is a part of Beacon deliverable
D2.2.
It is envisaged that there will be need to complement existing data and
therefore Beacon has a work package (WP4) for experimental work. The
experiments performed in the project will be
* complementary where the existing data is not sufficient
* only laboratory scale
This Data Management Plan concerns experimental data as well as models and
their input and output data.
Good research data management is a key conduit leading to knowledge discovery
and innovation, and to subsequent data and knowledge integration and reuse.
Instructions from the Commission regarding data management can be found here
_http://ec.europa.eu/research/participants/docs/h2020-fundingguide/cross-
cutting-issues/open-access-data-management/data-management_en.htm_ and here
_http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-
oa-datamgt_en.pdf_
# 2\. Purpose
The purpose of this plan for the management of Beacon Project data is to
support traceability, availability and quality-assured handling of the data
produced within the frame of the Beacon project and, to the extent that it is
possible, realistic and relevant, to support making the research data in the
Beacon project "_findable_, _accessible_, _interoperable_ and _reusable_
(**FAIR**)", as formulated in the H2020 online manual.
# 3\. Delimitations
This DMP does not concern the data assembled in WP2 from other projects for
reuse in the project. This reused data is documented in an excel database,
published together with the Beacon deliverable D2.2.
# 4\. Summary of the guidelines
Data and models produced, generated and/or developed in Beacon have to be
saved, stored and backed up in a safe way for _at least 5 years_ after the
Beacon project is finalised.
The Commission requires that data are kept "_Findable_, _Accessible_,
_Interoperable_ and _Reusable_ (**FAIR**)", to the extent possible, during and
after the project. (These concepts are explained further down.)
Partners should normally use their own procedures and management systems to
manage data and models. If such procedures are lacking, use the guidelines
below.
In case the dataset cannot be shared, the reasons for this should be
mentioned (e.g. ethical, rules of personal data, intellectual property,
commercial, privacy-related, security-related).
# 5\. Guidelines
## 5.1 General
Data from experiments, models, the input data used and output data generated
in the Beacon project need to be stored, findable, accessible, and as far as
possible interoperable and reusable (FAIR) for at least 5 years after the
Beacon project is finalised.
Data types are described in Appendix 3.
The **participants that have a management system for handling data and
information** that lives up to the relevant applicable parts of the guidelines
in section 4 should follow it and inform the rest of the consortium how the
data of the experiments and modelling tasks are managed, by filling out the
essential parts of Appendix 1 (modified if appropriate) and uploading it to
the relevant folder in the Beacon Projectplace. The Beacon Projectplace can
also be used to store project data.
The **participants that do not** have a management system for handling data
and information that lives up to the relevant parts of the guidelines in
section 4 will follow the guidelines described below and can use Appendix 1
(modified if appropriate) to describe data. Appendix 2 can be used as a
template when planning experiments and modelling tasks where data will be
produced.
In an experiment or modelling plan with an appropriate scope the relevant and
applicable parts of the following issues should be addressed. Appendix 1 and 2
can be used as templates. The partner(s) performing the experiments or
modelling and thereby producing the data are responsible for managing the
data.
_**Generally, the procedures normally applied within the organisation can and
should be used.** _
## 5.2 Experiment data summary
State/explain/specify:
* the purpose of the data collection/generation
* the relation to the objectives of the project
* the types and formats of data generated/collected
* if existing data is being re-used (if any) and if so the origin of the data
* the expected size of the data (if known)
* to whom will the data be useful
## 5.3 Modelling, Input Data and Output data
It is the responsibility of the Party performing the modelling that the models
and codes used are quality assured, validated and traceably described.
Modelling files will be
* (compressed if appropriate) saved and stored in a safe and findable way
* backed up
* together with:
  * a description of which version of the model/calculation code has been used for which modelling/calculation
  * Input Data
  * Output Data
  * (if relevant: which compression tool, including version, has been used)
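By way of illustration only (the guideline does not prescribe any tooling),
the following Python sketch shows one way a partner might bundle a modelling
run so that the code version, input data and output data are stored and backed
up together; the function name, directory layout and version string are all
hypothetical.

```python
import json
import zipfile
from pathlib import Path

def archive_model_run(run_dir: str, code_version: str, archive_path: str) -> None:
    """Zip a modelling run directory together with a small version manifest."""
    run = Path(run_dir)
    files = sorted(p for p in run.iterdir() if p.is_file())
    manifest = {
        "model_code_version": code_version,             # which code produced the run
        "compression_tool": "Python zipfile (stdlib)",  # tool used, per the guideline
        "files": [p.name for p in files],               # input and output data files
    }
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("MANIFEST.json", json.dumps(manifest, indent=2))
        for p in files:
            zf.write(p, arcname=p.name)

# Hypothetical usage:
# archive_model_run("runs/wp4_case01", "model-code v2.1", "EU-Beacon-WP4-001.zip")
```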
## 5.4 FAIR data according to H2020 guidelines
(also assembled in appendix 4)
**FAIR=** Findable, Accessible, Interoperable and Reusable
### Making the data Findable, including provisions for metadata (information about the data) [FAIR]
The discoverability of data should be outlined (metadata provision).
The identifiability of data should be outlined and standard identification
mechanism referred to. Do you make use of persistent and unique identifiers
such as **Digital Object Identifiers** ? Naming conventions used should be
outlined. The approach towards search keyword should be outlined. The approach
for clear versioning should be outlined. Standards for metadata (information
about the data) creation (if any) should be specified.
_Guidance_ :
The Research Data Alliance provides a _Metadata Standards Directory_ that can
be searched for discipline-specific standards and associated tools.
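A minimal sketch, assuming no discipline-specific standard has yet been
adopted, of a metadata record covering the points above (persistent
identifier, naming convention, search keywords, versioning); all field names
and values are illustrative placeholders, not a mandated schema.

```python
# Illustrative placeholder record only; field names are not a mandated standard.
dataset_metadata = {
    "identifier": "10.xxxx/placeholder-doi",  # persistent identifier (e.g. a DOI); placeholder
    "name": "EU-Beacon-WP4-001",              # follows the Appendix 1 naming convention
    "keywords": ["laboratory experiment", "placeholder keyword"],  # search keywords
    "version": "1.0.0",                       # explicit, clear versioning of the dataset
    "metadata_standard": None,                # e.g. an RDA-listed standard, once adopted
}
```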
### Making data openly Accessible [FAIR]
It should be specified which data will be made openly available, and how. If
some data is kept closed provide rationale for doing so.
_Guidance_ : Follow the principle " **as open as possible, as closed as
necessary** ". The Commission recognises that there are good reasons to keep
some or even all research data generated in a project closed. Where data need
to be shared under restrictions, explain why, clearly separating legal and
contractual reasons from voluntary restrictions. It is possible for specific
beneficiaries to keep their data closed if relevant provisions are made.
The _Registry of Research Data Repositories_ provides a useful listing of
repositories that you can search to find a place of deposit. If you plan to
deposit in a repository, it is useful to explore appropriate arrangements with
the identified repository in advance. What methods or software tools are
needed to access the data? Is documentation about the software needed to
access the data included?
Is it possible to include the relevant software (e.g. in open source code)?
Specify where the data and associated metadata (information about the data),
documentation and code are deposited.
Specify how access will be provided in case there are any restrictions. For
example, is there a need for a data access committee?
### Making data Interoperable [FAIR]
Assess the interoperability of your data. Specify what data and metadata
(information about the data) vocabularies, standards or methodologies you will
follow to facilitate interoperability.
_Guidance_ :
Interoperability means allowing data exchange and re-use between researchers,
institutions, organisations, countries, etc. (i.e. adhering to standards for
formats, being as far as possible compliant with available (open) software
applications, and in particular facilitating re-combinations with different
datasets from different origins).
Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow inter-disciplinary interoperability. If
not, will you provide mapping to more commonly used ontologies?
### Increase data Re-use (through clarifying licenses) [FAIR]
Specify how the data will be licenced ( **if applicable** ) to permit the
widest reuse possible. Specify whether the data produced and/or used is
useable by third parties, in particular after the end of the project. If the
re-use of some data is restricted, explain why. Specify the length of time for
which the data will remain re-usable.
_Guidance_ :
The _EUDAT B2SHARE_ tool includes a built-in license wizard that facilitates
the selection of an adequate license for research data. Specify when the data
will be made available for re-use. If applicable, specify why and for what
period a data embargo is needed. Reasons for embargoes may include time to
publish or seek patents. If an embargo is sought, specify why and for how
long, bearing in mind that research data should be made available as soon as
possible.
# 6\. Quality assurance of data
Data in the project should be quality assured. Describe or refer to the
processes used for quality assurance.
# 7\. Allocation of resources for open access of data
Costs related to open access to research data are eligible as part of the
Horizon 2020 grant (if compliant with the Grant Agreement conditions). Costs
are eligible for reimbursement during the duration of the project under the
conditions defined in the H2020 Grant Agreement, in particular _Article 6_ and
_Article 6.2.D.3_, but also other articles relevant for the cost category
chosen. If applicable, describe the costs and potential value of long-term
preservation.
# 8\. Data security
Address:
* data recovery
* if applicable: secure storage and transfer of sensitive data
Open data can be stored in the Beacon Projectplace. Regarding sensitive data,
bring the issue to the Executive Board for discussion. If necessary, such data
should be safely stored in certified repositories for long-term preservation
and curation.
# 9\. Ethical aspects
To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former.
_Guidance_ :
Consider whether there are any ethical or legal issues that can have an impact
on data sharing. For example, is informed consent for data sharing and long-
term preservation included in questionnaires dealing with personal data?
# 10\. National/funder/sectorial/departmental procedures
(If Applicable) Refer to other national/funder/sectorial/departmental
procedures for data management that you are using (if any).
## APPENDIX 1: Data management description template
Upload to the Beacon Projectplace
<table>
<tr>
<th>
**Metadata (information about the data)**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Dataset reference and/or name**
</td>
<td>
_Unique name identifying the dataset._
_Identifier should start with EU-Beacon-WPx where “x” is the relevant work
package number followed by a three digit number. Example: “EU-BeaconWP4-001”_
</td> </tr>
<tr>
<td>
**Datatype**
</td>
<td>
_Choose one or more of the relevant data types: Experimental Data,_
_Observational Data, Raw Data, Derived Data, Physical Data (samples), Models,
Images, Protocols, Input Data and Output Data. Data types are further
described in Appendix 3._
</td> </tr>
<tr>
<td>
**Source**
</td>
<td>
_Source of the data. Reference should include work package number, task number
and the main project partner or laboratory which produced the data._
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
_Provides a brief description of the data set along with the purpose of the
data and whether it underpins a scientific publication. This should allow
potential users to determine if the data set is useful for their needs._
</td> </tr>
<tr>
<td>
**Standards and information about the data**
</td>
<td>
_Provides a brief description of the relevant standards used and list relevant
metadata (information about the data) in accordance with the description in
Appendix 3._
_The usage of the Directory Interchange Format is optional._
</td> </tr>
<tr>
<td>
**Science Keywords**
</td>
<td>
_List relevant scientific key words to ensure that the data can be efficiently
indexed so others may locate the data._
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
_Description of how data will be shared both during and after the Beacon
project. Include access procedures, embargo periods (if any), outlines of
technical mechanisms for dissemination and necessary software and other tools
for enabling re-use, and definition of whether access will be widely open or
restricted to specific groups._
_Information should include a reference to the repository where data will be
stored._
_In case the dataset cannot be shared, the reasons for this should be
mentioned (e.g. ethical, rules of personal data, intellectual property,
commercial, privacy-related, security-related)._
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
_Description of the procedures that will be put in place for long-term
preservation of the data._
</td> </tr> </table>
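Purely as an optional illustration (our own helper, not part of the template),
a dataset name following the convention above can be checked mechanically.
Note that the template's own example, "EU-BeaconWP4-001", omits the hyphen
before "WP", so the hyphen is treated as optional here.

```python
import re

# "EU-Beacon-WPx" (hyphen before WP optional, as in the template's example)
# followed by a three-digit number.
DATASET_ID = re.compile(r"^EU-Beacon-?WP\d+-\d{3}$")

def is_valid_dataset_id(name: str) -> bool:
    """Return True if the name matches the Beacon dataset naming convention."""
    return bool(DATASET_ID.match(name))

assert is_valid_dataset_id("EU-BeaconWP4-001")
assert is_valid_dataset_id("EU-Beacon-WP4-002")
assert not is_valid_dataset_id("Beacon-WP4-1")
```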
## APPENDIX 2: Data set template
**Metadata (information about the data)**
**Dataset reference** **and/or name**
**Source**
**Dataset description**
**Standards and**
**metadata (information about the data)**
**Data sharing**
**Archiving and**
**preservation**
**APPENDIX 3: Data types**
# 1 Experimental Data
## Dataset description
The experimental data originate from measurements performed in a laboratory
environment, be it _in situ_ or _ex situ_ . The data comprise point or
continuous numerical measurements.
The data will be collected either on a sample basis (sampling an experiment at
a certain point in time) or on an experiment scale (without sampling the
experimental set-up). Data can be derived from either destructive or
preservative analyses. Experimental data collection can occur automatically or
manually, and will be available in a digital or a hard copy format. In the
case of the latter, experimental data will first be copied to e.g. a lab book
and then digitized.
Experimental data are supposed to be unique, in the sense that new experiments
will be set up, producing fresh data. In some cases, similar data will be
available from previous/other experiments within the project, within the
partners' institutions or from overlapping projects, allowing comparison and
integration of the newly obtained data.
## Standards and information about the data
Experimental data are obtained using standardized laboratory techniques which
are calibrated when applicable. Positive and negative controls are used and
standards, internal or external, are introduced.
Summary information about the data (metadata) can optionally be provided
according to a Directory Interchange Format (DIF). A DIF allows users of data
to understand the contents of a dataset and contains those fields which are
necessary for users to decide whether a particular dataset would be useful for
their needs.
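For orientation, a minimal sketch of what a DIF-style summary record might
contain; Entry_ID, Entry_Title, Science_Keywords and Summary are standard DIF
field names, while all values here are invented placeholders (DIF records are
normally serialised as XML, shown here simply as key/value pairs).

```python
# Placeholder values only; the field names follow the public DIF specification.
dif_record = {
    "Entry_ID": "EU-Beacon-WP4-001",
    "Entry_Title": "Placeholder title for an experimental dataset",
    "Science_Keywords": ["placeholder keyword 1", "placeholder keyword 2"],
    "Summary": "Brief description that lets users decide whether the dataset "
               "is useful for their needs.",
}
```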
# 2 Observational Data
## Dataset description
Observational research (or field research) is a type of correlational (i.e.,
non-experimental) research in which a researcher observes ongoing behaviour.
## Standards and information about the data
The information (metadata) regarding observational data should include any
standards used and the necessary information so that an external researcher
has the possibility to analyse how the data was gathered.
# 3 Raw Data
## Dataset description
_Raw data_ are primary data collected from a source, not subjected to
processing or any other manipulation.
Raw data are derived from a source, including analysis devices like a
sequencer, spectrometer, chromatograph etc. In most cases, raw data are
digitally available. In some cases (e.g. sequencing), the raw data will be
very extensive datasets.
Raw data has the potential to become information after extraction,
organization, analysis and/or formatting. It is therefore used as input for
further processing.
## Standards and information about the data
Raw data are obtained using standardized laboratory techniques which are
calibrated when applicable. Positive and negative controls are used and
standards, internal or external, are introduced.
Metadata (information about the data) should at least include standards,
techniques and devices used. It can optionally be provided according to a DIF.
A DIF allows users of data to understand the contents of a dataset and
contains those fields which are necessary for users to decide whether a
particular dataset would be useful for their needs.
# 4 Derived Data
## Dataset description
Derived data are the output of the processing or manipulation of raw data.
Derived data originate from the extraction, organization, analysis and/or
formatting of raw data, in order to derive information from the latter. In
most cases, derived data are digitally available, as are the raw data. Derived
data will allow for the interpretation of laboratory experiments, e.g. through
statistical analysis or bioinformatics processing.
## Standards and information about the data
Manipulation of data will be performed using a ‘scientific code of conduct’,
i.e. maintaining scientific integrity and therefore not falsifying the output
or its representation.
Information about the data (metadata) should include any standard or method or
best practice used in the analysis. It can optionally be provided according to
a Directory Interchange Format. A DIF allows users of data to understand the
contents of a dataset and contains those fields which are necessary for users
to decide whether a particular dataset would be useful for their needs.
# 5 Physical Data (samples)
## Dataset description
Physical data are samples that have been produced by an experiment or taken
from a given environment. Sampling of an environment or experiment is
performed in order to obtain information through analyses. As such,
experimental, raw and or derived data will be obtained from physical data.
When the analyses are destructive, the samples cannot be stored for later use.
When the analyses are preservative, samples can be stored for later use, but
only for a limited time.
## Standards and information about the data
When sampling an environment or experiment, blank samples are taken as well,
as a reference. Information that should be included about the data (metadata)
are description of the origin of the sample, age, processing, storage
conditions and expected viability of the sample (as some sets of samples can
only be stored for a limited time, due to their nature).
# 6 Images
## Dataset description
Imaging data are optical representations of physical objects.
Objects of macro- and microscopic scale can be imaged in a variety of ways
(e.g. photography, electron microscopy), enabling the optical appearance to be
captured for later use or for sharing. When required, the optical appearance
can be magnified (e.g. microscopy) and manipulated to enable the
interpretation of the objects (mostly samples from an environment or
experiment). Imaging data support the interpretation of other data, like
experimental data. Some imaging data will be raw data (see data type 3, Raw
Data), which need to be derived through image processing to enable
interpretation.
## Standards and information about the data
Advanced imaging devices are calibrated to ensure proper visualization.
Information about the data (metadata) which are provided are time of imaging,
device settings and magnification/scale when appropriate. In addition,
information will be provided about the object that is being imaged.
# 7 Protocols
## Dataset description
A protocol is a predefined written procedural method in the design and
implementation of experiments or sampling. In addition to detailed procedures
and lists of required equipment and instruments, protocols often include
information on safety precautions, the calculation of results and reporting
standards, including statistical analysis and rules for predefining and
documenting excluded data to avoid bias.
## Standards and information about the data
Protocols enable standardization of a laboratory method to ensure successful
replication of results by others in the same laboratory or by partners’
laboratories.
Information (metadata) that should be included for Protocols are the purpose
of the protocols, references to standards and literature.
# 8 Models, Input Data and Output Data
It is the responsibility of the Party performing the modelling that the models
and codes used are quality assured, validated and traceably described.
Modelling files will be zipped and saved, stored in a safe and findable way,
and backed up, together with a description of which version of the
model/calculation code has been used for which modelling/calculation, as well
as Input Data and Output Data.
**APPENDIX 4:**
**FAIR Data Management at a glance: issues to cover in your Horizon 2020 DMP**
This table provides a summary of the Data Management Plan (DMP) issues to be
addressed, as outlined above.
<table>
<tr>
<th>
**DMP component**
</th>
<th>
</th>
<th>
**Issues to be addressed (answers can be very short; state N/A where relevant)**
</th> </tr>
<tr>
<td>
**1\. Data summary**
</td>
<td>
</td>
<td>
State the purpose of the data collection/generation
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Explain the relation to the objectives of the project
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Specify the types and formats of data generated/collected
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Specify if existing data is being re-used (if any)
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Specify the origin of the data
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
State the expected size of the data (if known)
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Outline the data utility: to whom will it be useful
</td> </tr>
<tr>
<td>
2. **FAIR Data**
2.1. Making data findable, including provisions for metadata
(information about the data)
</td>
<td>
</td>
<td>
Outline the discoverability of data (metadata (information about the data)
provision)
Outline the identifiability of data and refer to standard identification
mechanism. Do you make use of persistent and unique identifiers such as
Digital Object Identifiers?
Outline naming conventions used
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Outline the approach towards search keyword
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Outline the approach for clear versioning
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Specify standards for metadata (information about the data) creation (if any).
If there are no standards in your discipline describe what type of metadata
(information about the data) will be created and how
</td> </tr>
<tr>
<td>
2.2 Making data openly accessible
</td>
<td>
</td>
<td>
Specify which data will be made openly available. If some data is kept closed,
provide a rationale for doing so
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Specify how the data will be made available
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Specify what methods or software tools are needed to access the data. Is
documentation about the software needed to access the data included? Is it
possible to include the relevant software (e.g. in open source code)?
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Specify where the data and associated metadata (information about the data),
documentation and code are deposited
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Specify how access will be provided in case there are any restrictions
</td> </tr> </table>
Beacon
D8.1 – Beacon Data Management Plan
Dissemination level: PU
Date of issue: **19/03/2018**
<table>
<tr>
<th>
2.3. Making data interoperable
</th>
<th>
</th>
<th>
Assess the interoperability of your data. Specify what data and metadata
(information about the data) vocabularies, standards or methodologies you will
follow to facilitate interoperability.
</th> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow inter-disciplinary interoperability. If
not, will you provide mapping to more commonly used ontologies?
</td> </tr>
<tr>
<td>
2.4. Increase data re-use (through clarifying licences)
</td>
<td>
</td>
<td>
Specify how the data will be licenced to permit the widest reuse possible
Specify when the data will be made available for re-use. If applicable,
specify why and for what period a data embargo is needed
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Specify whether the data produced and/or used in the project is useable by
third parties, in particular after the end of the project.
If the re-use of some data is restricted, explain why
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Describe data quality assurance processes
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Specify the length of time for which the data will remain re-usable
</td> </tr>
<tr>
<td>
3\. Allocation of resources
</td>
<td>
</td>
<td>
Estimate the costs for making your data FAIR. Describe how you intend to cover
these costs
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Clearly identify responsibilities for data management in your experiment
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Describe costs and potential value of long term preservation
</td> </tr>
<tr>
<td>
4\. Data security
</td>
<td>
</td>
<td>
Address data recovery as well as secure storage and transfer of sensitive data
</td> </tr>
<tr>
<td>
5\. Ethical aspects
</td>
<td>
</td>
<td>
To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former
</td> </tr>
<tr>
<td>
6\. Other
</td>
<td>
</td>
<td>
Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
</td> </tr> </table>
# Introduction
The COMPAR-EU project is part of the HORIZON 2020 Open Research Data Pilot,
the Pilot project of the European Commission (EC), which aims to improve and
maximise access to and reuse of research data generated by projects.
COMPAR-EU is a multimethod, inter-disciplinary project that will contribute to
bridging the gap between current knowledge and practice of self-management
interventions. It aims to identify, compare, and rank the most effective and
cost-effective self-management interventions (SMIs) for adults in Europe
living with one of the four high-priority chronic conditions: type 2 diabetes,
obesity, chronic obstructive pulmonary disease (COPD), and heart failure. It
will provide support for policymakers, guideline developers and professionals
to make informed decisions on the adoption of the most suitable self-
management interventions through an IT platform, featuring decision-making
tools adapted to the needs of a wide range of end users (including
researchers, patients, and industry).
Although COMPAR-EU will largely be based on secondary data extracted from
published randomised-controlled trials, it will also draw on potentially
sensitive health and socioeconomic information from stakeholder consultation
processes including interviews and surveys during the pilot phase.
The Data Management Plan (DMP) of COMPAR-EU describes how the data will be
handled during and after the project, the types of research data that will be
generated or collected during the project, the standards that will be used,
how the research data will be preserved and what parts of the datasets will be
shared for verification or reuse.
Research data linked to exploitable results will be deposited in an open
access repository, unless doing so would compromise their commercialisation
prospects or the data lack adequate protection. Related questions will be
subject to our internal IP assessment to ensure background and foreground IP
is protected, while supporting exploitation of research findings.
The DMP is intended to be a living document that will evolve during the
lifespan of the project. It will be updated annually during project lifetime
with the Project Periodic Reports (month 12/24/36/48/60).
# Data summary
The expected types of research data that will be collected or generated along
the project are categorised into eight groups:
_**Table 1. Datasets** _
<table>
<tr>
<th>
**#**
</th>
<th>
**Dataset**
</th>
<th>
**Lead partner**
</th>
<th>
**Related WP(s)**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
Taxonomy of SMIs
</td>
<td>
FAD
</td>
<td>
WP2
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
Core Outcome Sets (COS)
</td>
<td>
EPF
</td>
<td>
WP2, WP3
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
Summary of evidence of SMIs
</td>
<td>
NIVEL
</td>
<td>
WP2, WP3, WP4
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
Effectiveness of SMIs
</td>
<td>
UOI
</td>
<td>
WP4, WP5
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
Cost-effectiveness of SMIs
</td>
<td>
iMTA
</td>
<td>
WP5, WP6
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
Contextual implementation factors
</td>
<td>
NIVEL
</td>
<td>
WP5, WP7
</td> </tr>
<tr>
<td>
**7**
</td>
<td>
Decision-making tools
</td>
<td>
IR-HCSP
</td>
<td>
WP5, WP6, WP7, WP8
</td> </tr>
<tr>
<td>
**8**
</td>
<td>
Dissemination and exploitation
</td>
<td>
OptiMedis
</td>
<td>
WP9
</td> </tr> </table>
We briefly describe the type of data to be collected or generated within each
group:
### Dataset 1: Taxonomy of SMIs
To develop a draft taxonomy of SMIs, a literature review will be conducted and
taxonomies of good practices from the EU project PRO-STEP 1 will be used.
The use of a taxonomy increases clarity in defining and comparing complex
phenomena and supports the development of a theoretical framework. Experts on
self-management and stakeholders will be consulted via an online survey in
order to refine and validate the draft taxonomy developed through literature
searches. The data set of the online survey, including both labels and data,
will be maintained for the final _COMPAR-EU data dictionary_ . Data collected
by this online survey will not be for public use.
For the literature review, a small database of approx. 40 kB will be created
and stored in an interoperable format to allow re-use irrespective of the
software used in the original application. For each literature review,
including those conducted for Datasets 2 to 8, the same interoperable
principle applies; in addition, we will maintain a log file including the
final search strategy that led to the final selection of references.
Deliverable D2.1 will contain a validated and refined taxonomy of SMIs for
patients with type 2 diabetes, obesity, COPD and heart failure (WP leader:
FAD). The development of the taxonomy of SMIs will be useful to researchers
and guideline developers. It will provide a framework for subsequent research
tasks and will be made publicly available.
### Dataset 2: Core Outcome Set
To develop core outcome sets (COS), we will create an exhaustive database
listing relevant health outcomes using data from previous EU projects (PRO-
STEP and EMPATHiE), existing COS databases such as COMET 2 or the
International Health Outcomes Consortium 3 and searching (via snowballing)
other relevant organisations.
This is the basis for the development of a Delphi consensus process which will
collect patients´ preferences on outcomes. A literature review of qualitative
and quantitative studies on the relative importance patients attach to
outcomes for the relevant conditions will be made. We will use the Confidence
in the Evidence from Reviews of Qualitative research (CERQual) approach. In
addition, a grey literature search (e.g. reports from patient organisations
and projects that have addressed patients´ priorities related to self-
management, patient empowerment, and living well with chronic conditions) will
be performed. To achieve consensus on the final COS, an outcomes
prioritisation workshop will be organised involving equal split between
patients and healthcare professionals. In a modified Delphi consensus process,
four panels of patients and carers will evaluate which outcomes linked to
chronic diseases are considered as most important. In a final workshop,
patients and other stakeholders (healthcare professionals, guideline
developers and researchers) will discuss the final COS to be included in the
project. Any standardised assessment of COS, for example by Delphi consensus
process, will be stored in appropriate databases to support transparency of
the research process. Any non-standardised assessment of COS data, for example
through workshop discussion, will be documented through workshop reports and
minutes, which will be shared amongst participating researchers.
The development of COS will feed into subsequent research phases. Deliverable
D3.1 will contain a report on each condition COS incorporating patients´
preferences (WP leader: EPF). We will report on the development process to
inform other researchers and guideline developers.
### Dataset 3: Summary of evidence of self-management interventions (SMIs)
In this dataset, all necessary data from existing randomised controlled trials
(RCTs) on SMIs will be synthesised for the comparative analyses of SMIs. We
will build on existing information from PRO-STEP and adjust and update it for
COMPAR-EU. We will review around 4000 RCTs from peer-reviewed journals in
PubMed, MEDLINE, and Cochrane library databases. A detailed data extraction
sheet is being prepared by the research team and will form the basis for data
extraction.
As part of our quality assurance process, researchers reviewing the literature
and extracting data will participate in a training workshop. Deliverable D4.1
will contain a report on descriptive findings on the effectiveness of specific
SMIs by each studied outcome and condition and includes information on type
and number of interventions, outcome results, patient characteristics, self-
management components, and comorbidity profiles (WP leader: NIVEL).
This output will be the basis for the quantitative network meta-analysis in
this project. However, the database derived from this major effort of
literature review and data extraction potentially informs other research
applications. The standardised dataset including all data extracted and search
strategies in the accompanying literature databases in interoperable format
will be made available to other researchers if approved by the steering
committee.
### Dataset 4: Effectiveness of SMIs
To compare the relative effectiveness of SMIs, network meta-analysis (NMA)
will be applied. NMA statistically combines direct and indirect evidence from
RCTs in a single analysis in order to compare and rank SMIs. For all four
chronic diseases, the relative effectiveness will be estimated for each SMI
and for each component of the outcomes of interest. The confidence in the
output of the analysis will be determined both overall and for each
comparison. In order to perform NMA, data synthesised in Dataset 3 will be
used.
The generated list of ranked SMIs according to their relative effectiveness
organised by condition and outcome measures, which will be our deliverable
D5.1 (WP leader: UOI), might be useful for policymakers, researchers and the
public. The process of producing the ranked list of SMIs (output) will be able
to be traced back to the input “Dataset 3” by providing details on the
statistical protocol.
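As background (the standard indirect-comparison identity underlying any NMA,
not a COMPAR-EU-specific formula): if trials provide direct effect estimates
of intervention B versus A and of C versus A, an indirect estimate of C versus
B is obtained through the common comparator A:

```latex
\hat{d}_{BC}^{\,\text{ind}} = \hat{d}_{AC} - \hat{d}_{AB},
\qquad
\operatorname{Var}\!\left(\hat{d}_{BC}^{\,\text{ind}}\right)
= \operatorname{Var}\!\left(\hat{d}_{AC}\right)
+ \operatorname{Var}\!\left(\hat{d}_{AB}\right)
```

NMA combines all such direct and indirect estimates across the whole network
simultaneously, which is what allows the SMIs to be ranked.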
### Dataset 5: Cost-effectiveness of SMIs
To model the impact of SMIs from the perspective of cost-effectiveness,
existing data will be used by collecting secondary data (i.e. meta-data in
summarised form taken from the literature) and primary data from anonymised
open-access online data sources (such as EU SHARE data). These data are used
to estimate the relationship between patient characteristics and public
expenditures. Dataset 5 combines effectiveness data from the NMA with cost
data from separate literature reviews in order to perform the cost-
effectiveness analysis.
The ranking of SMIs according to their relative cost-effectiveness, which will
be our deliverable D6.1 (WP leader: iMTA), will also inform policymakers,
researchers and the public.
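For orientation (the textbook summary measure, not a project-specific
modelling choice), cost-effectiveness comparisons of this kind are
conventionally summarised by the incremental cost-effectiveness ratio (ICER)
of an SMI against its comparator:

```latex
\mathrm{ICER} = \frac{\Delta C}{\Delta E}
= \frac{C_{\text{SMI}} - C_{\text{comparator}}}{E_{\text{SMI}} - E_{\text{comparator}}}
```

where \(C\) denotes expected costs and \(E\) expected effects (for example,
quality-adjusted life years).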
### Dataset 6: Contextual implementation factors
To analyse contextual and implementation factors with the aim to improve the
implementability of SMIs, we will search for qualitative studies on the most
effective SMIs in the PubMed and CINAHL databases. A qualitative content analysis
will be applied to these studies and outcomes will be synthesised. The aim of
this analysis is to determine the key contextual factors, such as intervention
settings and mechanism that produce specific outcomes.
A framework will be developed for understanding the success of the selected
SMIs for patients with one of the four chronic conditions. Data-analysis and
synthesis will be conducted using a narrative approach initially. Due to the
heterogeneous nature of the studies, thematic analysis will be employed. The
codes/thematic framework structure and outputs of the content analysis will be
documented for further research applications.
A modified Delphi process with a panel of experts on self-management and/or
implementation of healthcare interventions will be organised in order to
prioritise the influence and importance of the contextual factors on a
European level. The items assessed by panel members will be documented in the
project's data dictionary; the raw data will be kept by the project partner for
secondary research use. Participant expertise and knowledge of the healthcare
field can add information that cannot be found in scientific articles but are
of value for daily practice.
Deliverable D7.1 will contain a report on contextual factors for the
implementation of the most effective SMIs for the studied conditions (WP
leader: NIVEL). The generated list of facilitators and barriers of the
implementation of SMIs will inform the design of decision-making tools and
inform the business case development for SMIs.
### Dataset 7: Decision-making tools
Decision-making tools will be developed to facilitate and disseminate the use
of the most effective self-management interventions among key target end users
(patients, healthcare professionals, policymakers, researchers, and SMEs).
Development of decision-making tools
Using the GRADEpro GDT software, we will generate interactive Summary of
Findings tables (iSoF), which present results per outcome and comparison
succinctly, and interactive Evidence to Decision frameworks (iEtD), which
include draft recommendations/decisions that can be adopted or adapted to
different settings based on the magnitude of desirable and undesirable effects
of the SMIs, stakeholders' views about outcome importance and
acceptability/feasibility aspects, and information on resource use and cost-
effectiveness. iSoFs and iEtDs are designed to be used by policymakers and
guideline developers. For both, small-scale pilot tests will be run in the
five participating countries, inviting a group of end users (policymakers,
guideline developers and researchers, according to the type of tool) to test
them and give feedback through a semi-structured online consultation. We will
strive to include at least 60 end users. Questions will cover content,
functionality and applicability, transferability and scalability
(participating partners: OptiMedis, NIVEL, EPF).
Development of patient decision aids
We will generate interactive, semi-automatic decision aids based on GRADE
evidence profiles, consisting of a plain-language summary and, where
applicable, an interactive graphic to be used in the patient-physician
interaction. To pilot the decision aids, we will organise small focus groups and
user experience tests in simulated settings using techniques such as thinking-
out-loud. For focus groups, we will invite patients living with the condition
and physicians. All focus group discussions will be audio-recorded,
transcribed, and analysed by theme.
The pilot-testing activities and focus groups will be based on previous
material used for similar purposes. Quantitative data resulting from this
assessment will be maintained by the project partner in source format,
qualitative data will be summarised by means of reports and minutes.
Deliverable D8.1 will contain a refined set of evidence profiles and
interactive Summary of Findings tables, Evidence to Decision frameworks and
patient decision aids (WP leader: IR-HSCSP). Subsequently, deliverable D8.2
will be the set-up of the refined COMPAR-EU platform after piloting (WP
leader: IR-HSCSP).
Generated data might be useful to patients, physicians, carers, policymakers
and other stakeholders in the field of self-management and chronic diseases
and will inform the design of dataset 8.
### Dataset 8: Dissemination and exploitation
As part of the comprehensive dissemination, communication, and exploitation
plan to maximise the impact of the COMPAR-EU project, we will conduct reviews
of the grey literature to identify relevant policy and regulatory frameworks
that will influence the adoption of self-management and decision-making tools
(such as WHO's global strategy on people-centred and integrated health
services, HTA agencies' standards on shared decision-making and
self-management, and eHealth/mHealth strategies). The process for literature
review and
maintaining the datasets will be the same as for other datasets. Dataset 8
will also include insight generated through workshops with industry, pharma
and SMEs to identify business opportunities. These data are collected through
in-depth interviews with relevant managers, clinical decision makers,
policymakers and users. Transcripts of these interviews will be maintained in
anonymous format and inform the development of case studies for business
development of SMIs in various health systems. Deliverable D9.2 will contain
business cases for decision-making tools developed (WP leader: OptiMedis AG).
# FAIR Data
In order to make the research data generated by COMPAR-EU findable,
accessible, interoperable and re-usable we apply the FAIR principles 4 .
<table>
<tr>
<th>
**To be Findable:**
F1. (meta)data are assigned a globally unique and persistent identifier
F2. data are described with rich metadata (defined by R1 below)
F3. metadata clearly and explicitly include the identifier of the data it
describes
F4. (meta)data are registered or indexed in a searchable resource **To be
Accessible:**
A1. (meta)data are retrievable by their identifier using a standardized
communications protocol
A1.1 the protocol is open, free, and universally implementable
A1.2 the protocol allows for an authentication and authorization procedure,
where necessary A2. metadata are accessible, even when the data are no longer
available **To be Interoperable:**
I1. (meta)data use a formal, accessible, shared, and broadly applicable
language for knowledge representation.
I2. (meta)data use vocabularies that follow FAIR principles I3. (meta)data
include qualified references to other (meta)data **To be Reusable:**
R1. meta(data) are richly described with a plurality of accurate and relevant
attributes
R1.1. (meta)data are released with a clear and accessible data usage license
R1.2. (meta)data are associated with detailed provenance
R1.3. (meta)data meet domain-relevant community standards
</th> </tr> </table>
## Making data findable, including provisions for metadata
We will create Metadata to describe each of the datasets 1 to 8 as described
above. For each dataset we will maintain a unique identifier and a log of the
data generating mechanisms and the process of their development, and
accompanying documents, such as data dictionaries to facilitate interpretation
of the dataset (see criteria re-usable). Metadata and any derived documents
such as data codebooks will be registered in updated versions of this
document.
We will use a standard convention to name electronic files, consisting of the
elements PROJECT ACRONYM + Type of document + Name of document + work package
it relates to + partner organisation name + name of research partner, for
example for this DMP:
COMPAR-EU_DELIVERABLE9.3_DataManagementPlan_WP9_OptiMedis_GroeneAdrion_v1.0
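As a small illustration (our own helper, not part of the plan), the convention
can be applied mechanically; the component values below reproduce the plan's
own example.

```python
# Illustrative helper only; joins the naming-convention elements with underscores.
def compar_eu_filename(doc_type: str, name: str, work_package: str,
                       partner: str, researcher: str, version: str) -> str:
    """Build a file name following the COMPAR-EU naming convention."""
    return "_".join(["COMPAR-EU", doc_type, name, work_package,
                     partner, researcher, version])

print(compar_eu_filename("DELIVERABLE9.3", "DataManagementPlan", "WP9",
                         "OptiMedis", "GroeneAdrion", "v1.0"))
# -> COMPAR-EU_DELIVERABLE9.3_DataManagementPlan_WP9_OptiMedis_GroeneAdrion_v1.0
```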
Following recommended practice, we will produce a ´readme´ text file including
information on the content of the dataset and instructions for its use. The
Research Data Alliance provides a Metadata Standards Directory that can be
searched for discipline-specific standards and associated tools. 5 For
future iterations of the DMP we will assess whether to adopt any such Meta
Standards to our dataset.
Each research task leading to a literature review will create EndNote files
(or an interoperable format) for the selected RCTs, for each of the four
conditions separately. Files contain all information necessary to easily find
the original papers, including DOI numbers and PDFs of full text when
available.
For other datasets, in particular the NMA and cost-effectiveness analysis, we
will provide information on the research methodology used, detailed
information on collection methods, sample size, variable definition,
assumptions made, format and file type of the data, explanation of data coding
and analysis performed.
The results from each WP will be published in scientific journals indexed in
the major databases. There will be disclaimers that all underlying data will
be available upon request.
All publications coming from these WP’s (papers, reports) will also be
referred to – and if permitted under the publication license – be published on
the COMPAR-EU website with their DOI and/or ISBN.
We will facilitate search strategies to include our research output by
including relevant keywords. In addition, bibliographic metadata that identify
the deposited publication will be provided in a standard format and will
include the standard disclaimer “This project has received funding from the
European Union’s Horizon 2020 research and innovation programme under grant
agreement No 754936”. In addition, we will add the publication date, and
length of embargo period if applicable, and a persistent identifier.
## Making data openly accessible
A number of project deliverables have been classified as 'confidential' (see
below, page 6 ff., ANNEX 1 (part A) of the Grant Agreement). The deliverables
have been classified as such because they are either intermediate project
outputs whose early publication might interfere with future scientific
publications or they reflect key outputs of the project, which need to be
protected by the research team under their consortium agreement´s IP
regulations.
Nevertheless, COMPAR-EU pursues the policy to make data openly accessible for
noncommercial use. To that extent, we have planned for GOLD Open Access for
all key publications of the project and all scientific publications published
by the COMPAR-EU consortium partners will be available online (open access).
Those publications will also be published on the COMPAR-EU website together
with further research outputs, such as summary datasets, book chapters,
theses, conferences papers, and other unpublished articles.
Following the search terms facilitated under the rules in section 2.1, researchers will
be able to identify our research output/publications, access the final data
management plan and therefore assess which dataset was used to produce the
research. In this regard, the data management plan including all its annexes
serve as the standardised communications protocol for COMPAR-EU. It will be
accessible free of charge on the projects web pages.
In order for researchers to request further information on the data structure
or even data itself, we will establish an authentication and authorisation
procedure. This procedure will be included in the data management plan and
also refer to directly on our webpages to clarify the rules and steps under
which data can be requested for research and non-research use. During the
project timeframe, any such request will be discussed at the steering
committee.
To ensure transparency and validity of the research, interested parties may
during the timeframe request insight into the underlying raw data. It is the
assumption that this may be needed in some cases for a journal to validate the
data included in a research article or where questions regarding the integrity
of the data have been raised. During the timeframe of the project, such
queries will be strictly dealt with by the project's steering committee.
After termination of the project, one of the project partners will act on
behalf of the steering committee to ensure data requests can be processed for
at least as long as legal requirements warrant storage of the research data.
This partner will also be in charge of maintaining all research data and,
where appropriate and legally required, procedures to access the raw data from
partner organisations. Contact details for the research partner in charge of
long-term data handling will be available via the project website, which has
been reserved for 10 years initially.
Due to the diversity of institutional arrangements and the heterogeneity of
the underlying data sources, some partners have additional arrangements for
data archiving and access:
* NIVEL will make their data open access upon request by using the repository of Data Archiving and Networked Services (www.dans.knaw.nl).
* IR-HSCSP will make the underlying summarised quantitative and qualitative data available upon request from its agency internal databases. No specific software is needed to access the data.
* Software codes used by UOI will be made available either in the publications or in the program’s or personal webpages.
* A cost-effectiveness model will be developed using analysed secondary data. Source code (most likely in the open source R programming language) for the estimates of effectiveness and costs will be made publicly available by iMTA after registering with the research group so that use can be monitored, and misuse can be prevented, but only for non-commercial use. Evaluation of the accesses will be overseen by the research team and a clear a-priori defined research question will be requested as prerequisite.
## Making data interoperable
As noted above, the data management plan and its future iterations will serve
as the standardised communications protocol for COMPAR-EU. It provides access
to all relevant metadata to make the research outputs – as much as possible –
interoperable with aligned research tasks. This includes standardised
databases and methodological documentation for literature reviews, qualitative
work, network meta-analysis and cost-effectiveness analysis. As part of the
ongoing work in building the data management plan, data dictionaries will be
established to allow researchers within the consortium to ensure efficiency in
the exchange of data. Metadata will be established as described under 2.1.
applying standard terms, templates and processes. These standards will then
ensure interoperability of the data generated by the consortium.
## Increase data re-use
The FAIR criteria are overlapping, therefore some of our plans under re-use
are also related to the criteria findable, accessible and interoperable.
The data management plan serves as standardised communications protocol,
providing a rich description of all data sets and their attributes, providing
procedures for data access requests, and are described to the level of rigor
as expected in the pertinent scientific community.
In general, during the lifespan of the project, exclusive use of all project-
related data is expected by partners of the consortium. However, the steering
group would consider requests for data, for example if another EU project with
similar aims and tasks was funded and an exchange of data was mutually
convenient. All other data requests during the project lifetime will not be
rejected a priori, but discussed by the steering board.
After the active project phase, external partners can request access to data
for noncommercial use. Updated versions of the DMP will propose a process for
such data requests. It is envisaged that COMPAR-EU datasets are made available
to the scientific community after project closure, subject to ongoing use by
the consortium members and the urgency of making data accessible.
The intellectual property arrangements specified in the consortium agreement
-and the decisions taking by the steering group- will guide the timing and
scope of making data available for re-use.
Due to the diversity of institutional arrangements and the heterogeneity of
the underlying data sources, some partners have additional arrangements for
data re-use:
* NIVEL: The data will be licensed via publications in scientific journals indexed in the major databases. At the end of the project, data will be made available for re-use. Data possessed by NIVEL will stay in archives for 10 years. NIVEL works according to an internal quality system (ISO-9001:2015 certified). This quality system concerns NIVEL’s primary task: doing research. Besides that, NIVEL has committed itself to the Dutch behavioural standards for scientific practice, following the principles of honesty, accuracy, reliability, controllability, independence and responsibility.
* IR-HSCSP: The data will be licensed via publications in scientific journals indexed in the major databases, data will be archived for 5 years. As part of the local ethics approval a quality management plan including random checks of consistency and data transcription accuracy is required.
* FAD: Confidential, restricted under conditions set out in Model Grant Agreement. It will be made available as part of publication to protect the intellectual property of participating partners.
* iMTA: data will be made available under a creative common open access agreement, with limitations for commercial re-use (i.e. re-use by commercial entities for-profit reasons). Data will remain available for a period of 5 years.
# Allocation of resources
Resources to ensure efficient implementation of the DMP are included in the research budget of each partner, as the creation of datasets and the quality assurance procedures that ensure accuracy and rigor in dataset design, execution of analysis and reporting cannot be disentangled from the integral research process.
However, we have allocated a budget for open-access publications as part of
the Horizon 2020 grant (see grant agreement).
Each COMPAR-EU partner is responsible for the creation, management and
appropriate storage of the datasets they create. Validation and registration
of datasets and metadata is the responsibility of the partner that generates
the data in the WP. These partners will liaise with OptiMedis to ensure that
standard nomenclature and reporting templates are designed in a participatory
manner and implemented efficiently.
The overall responsibility for the future development of this data management
plan rests with OptiMedis AG. When datasets are updated, the WP leaders that possess the data are responsible for managing the different versions and for ensuring that the latest version is made available to OptiMedis AG.
OptiMedis AG will align the changes with other datasets and update the DMP as
necessary.
Future iterations will advance definitions for data capture, metadata
production, data quality, storage and backup, data archiving & data sharing.
The costs and potential value of long term preservation will be assessed in
the last year of the project.
# Ethical aspects
As indicated in the project proposal (section 5.1., page 125) the project does
not involve primary human data collection, processing of personal data or
observation of people. The only involvement of patients and other stakeholders
will be participating in the pilot of the IT platform through interviews or
surveys. No physical injury, financial, social or legal harm will be posed to
the participants, and potential psychological risks will not exceed the daily
life standard. See also deliverable D10.5 'Ethical issues report' for further
information and the ethical approval.
There are no major ethical issues regarding the transfer of sensitive data, as most of the project's data is extracted from the literature or pertains to the assessment of outcome measures available in databases. Data from individual interviews, focus groups or consensus groups in which patients and professionals take part will not be made directly available to the public.
Anonymised data sets might be provided for future research applications. Raw databases including recordings or transcripts that might include person identifiers (but not names) will be secured in line with institutional, national and EU legislation and kept out of reach of non-authorised persons. These data will only be kept to facilitate an audit of the integrity of the research process, if so initiated by an authorised party.
# Data protection and security
Patients and other stakeholders who are asked to participate in COMPAR-EU
research project will be completely free to decide whether or not they would
like to take part. Their participation will start after they have signed the
provided informed consent (see annex 1). Prior to deciding to participate,
they will get information in oral and written form (in paper or online,
depending on the method of participation) by the researchers with regards to
the nature, scope and possible consequences of the study, and all potential
participants will have full opportunity for asking questions and withdrawing
without prejudice at any point. Written information (in paper or online) will
be given in the form of patient information leaflets and informed consent
leaflets. The patient information leaflet will provide a full explanation of
the purposes and processes of the study and what participation involves.
Declining to participate will be possible at all stages of the project. The
consent form will be signed and kept in the records of the partners’
institution. Each partner involved in this particular part of the study will
be responsible for adequate storage of informed consents according to good
clinical practice and national regulations. The consent forms, together with any potentially identifying temporary patient data, will be kept locally, and patient-identifying data will be kept disentangled from all other data.
In our study, no names of patients or other information that will permit the
identification of persons will be kept after the follow-up period of four
weeks after the conclusion of the survey. Information will be coded and pseudonymised, distinguishing between identifying and non-identifying data, so that individual information cannot be traced back to the patient; furthermore, it will not be possible to enter patient names and other personal identifiers into the main COMPAR-EU database. Identifying data will not be stored in the main COMPAR-EU database.
For piloting the IT platform, users are asked to log in with a self-chosen username and a non-trivial password of at least 8 characters.
All data gathered will be transferred using encryption (SSL and/or HTTPS as
technical protocols [HTTPS], [SSL3]).
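For illustration only, a check of the stated password policy could look like the sketch below; the additional character-diversity rule is an assumption about what "non-trivial" might mean, not a COMPAR-EU requirement.

```python
import re

MIN_LENGTH = 8  # policy from the DMP: non-trivial password of at least 8 characters

def is_non_trivial(password: str) -> bool:
    """Illustrative password check: minimum length plus basic character
    diversity. The diversity criterion is an assumption, not taken from
    the DMP."""
    if len(password) < MIN_LENGTH:
        return False
    # Require at least two character classes (letters, digits, other)
    classes = sum(bool(re.search(p, password))
                  for p in (r"[A-Za-z]", r"\d", r"[^A-Za-z0-9]"))
    return classes >= 2

print(is_non_trivial("abc123xy"))   # True: 8 chars, two character classes
print(is_non_trivial("password"))   # False: single character class
```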
Updated versions of the DMP will include specific data protection protocols, which will be prepared for each WP. Anonymisation protocols will be attached in the annex of updated versions once stakeholders' participation has taken place.
Data security is fundamentally the responsibility of each partner collecting, processing and reporting data. All partners declare that they abide by their institutional, national and EU standards in this regard, in particular in relation to the secure storage and transfer of sensitive data.
Standards for long term preservation and curation, including the use of
certified repositories, will be discussed by the consortium before closure of
the project.
# Other
Other than the procedures described here, COMPAR-EU does not make use of other
national/funder/sectorial/departmental procedures for data management.
# INTRODUCTION
The M4F H2020 project on multiscale modelling for fusion and fission materials
1 is participating voluntarily in the Open Research Data (ORD) pilot
according to the H2020 guidelines on data management 2 and the requirements
specified in Article 29.3 of the H2020 Model Grant Agreement.
The purpose of this document is to provide an analysis of the main elements of
the data management policy that will be used by the M4F project partners with
regard to all the data sets that will be generated by the project.
The M4F data management plan (DMP) is a living document that will evolve
during the lifespan of the project. In this respect, the DMP will be updated
and revised at each reporting period.
# GLOSSARY
CEN European Committee for Standardization
DMP Data Management Plan
DOI Digital Object Identifier
EBSD Electron Back Scatter Diffraction
EERA European Energy Research Alliance
EMMC European Materials Modelling Council
H2020 Horizon 2020
HDF5 Hierarchical Data Format 5
JPNM Joint Programme on Nuclear Materials
M4F Multiscale Modelling for Fusion and Fission Materials
MODA Modelling Metadata
NINA Nanoindentation for Nuclear Applications
NPP Nuclear Power Plant
ODIN Online Data and Information Network
ORD Open Research Data
SEM Scanning Electron Microscopy
TEM Transmission Electron Microscopy
# DATA SUMMARY
## DATA MANAGEMENT
With the M4F consortium undertaking materials testing and modelling tasks,
there is a requirement for services in support of effective collection and
exchange of data. Consequently, a carefully considered data management
strategy has been formulated with a view to enabling exchange of test data
between project partners; benchmarking of modelling data; and re-use beyond
the term of the project. Thus, test data will be deposited in MatDB (a domain-
specific, OpenDOAR-listed 3 repository) in a timely manner; pre-normative
formats for nanoindentation test data to facilitate data transfer and systems
integration will be developed in the scope of an 18-month CEN Workshop; and
data citation will be used to enable discovery and harvesting of data sets. To
augment its data management strategy, M4F is participating voluntarily in the
Horizon 2020 ORD pilot.
As described in the following subsections, while initially the data will be
restricted to project partners, data citation will ensure the data can be
discovered, thereby providing a viable approach to exposing research data of
inherent commercial value without necessarily making the data sets themselves
immediately available. The conditions for data exchange amongst project
partners will be agreed at the M12 project meeting. At project midterm, M4F
project partners will be consulted on the topic of changing the dissemination
level of the data from restricted to Open Access. This approach is consistent
with the ORD 'as open as possible, as closed as necessary' principle, whereby
there is a balance that respects commercial, research, and public interests.
### MatDB
At _https://odin.jrc.ec.europa.eu_ the European Commission JRC hosts an Online
Data and Information Network (ODIN) in support of energy and transport
research. The facility consists of a collection of scientific databases
organized into various categories. In the engineering category, MatDB is a
relational database application that contains over 20,000 test results coming
mainly from European R&D projects on engineering materials and provides a web-
interface for data content, data entry, data retrieval and analysis routines.
The database covers thermo-mechanical and thermo-physical properties data of
engineering alloys at low, elevated and high temperatures for base materials
and joints, including irradiated materials for nuclear fission and fusion
applications, thermal barrier coated materials for gas turbines and properties
of corroded materials. M4F test data will be deposited at MatDB in a timely
manner and data citation will enable discovery and harvesting of data sets.
## DATA SET DESCRIPTION
With a view to benchmarking the results of the M4F modelling activities,
a reference database will be created consisting of data sets from approximately
100 nanoindentation tests, including the results of TEM and SEM-EBSD
microstructural analyses 4 . Further, since M4F is devoted to physics-based
multiscale simulation activities, two different activities are coupled
together, namely dedicated experiments (mechanical testing and microstructural
analysis) and computer modelling. Effective data exchange between both
activities is of paramount importance for the purposes of validating the
models. For this the modellers need digital, numerical representations of
microstructures at a length scale that is consistent with that of the model
e.g. atomic modelling codes need details of vacancies, interstitials, etc.
whereas crystal plasticity models need grain size, grain orientation, etc. The
challenges associated with the quantification and storage of microstructural
data were already identified in previous EU-funded projects, including PERFECT
5 (FP6) and PERFORM60 6 (FP7) and are also the subject of ongoing
discussion within the H2020 SOTERIA 7 project, where the main challenge is
the quantification of radiation damage induced features. The numbers that
modellers need are mean size, density and chemical composition of radiation
damage features such as clusters and matrix damage. However, the error
quantification depends on each microstructural technique used e.g. TEM, SANS,
and APT. Further, radiation damage features are not homogeneously distributed
in the microstructure but instead are preferentially located near
dislocations, grain boundaries or other sinks, so spatial distribution is
another key issue for the quantification of the microstructure. A further issue relates to the different scales involved in describing the microstructure: grains, precipitates, vacancies. While the development of a robust materials
information management system to enable the connections at various lengths
scales to be made between experimental data and corresponding multiscale
modelling toolsets 8 is out of the scope of the project in terms of budget
and time, a proposal will be forthcoming for an adequate database structure
and to screen the difficulties associated with the data collection,
verification and storage, especially from microstructural examination and
model parameterization.
### Existing data
Data for non-irradiated samples of materials of interest to M4F obtained
during the EERA JPNM NINA pilot project on nanoindentation for nuclear
applications will be uploaded.
### Mechanical test data
Nanoindentation data coming from the WP6 (nanoindentation) test campaign will
be uploaded by project partners to MatDB. In the absence of a pre-existing
module, a single-cycle nanoindentation module has been implemented according
to the specifications listed in Table 1 through Table 5 and shown in Figure 1,
noting that the field names are consistent with the ISO 14577 standard for
instrumented indentation testing 9 . This is because the fields listed in
Table 1 through Table 5 are largely based on a preliminary examination of the
ISO 14577 standard, namely the entries in Clause 3 – Symbols and designations.
In accordance with accepted software development practices, the
nanoindentation module will be refined over a number of development iterations
based on the findings of further requirements gathering (meaning a close
examination of all normative clauses of the ISO 14577 standard) and feedback
from WP6 partners.
**Table 1.** Specimen dimensions

| **Label** | **Units** | **Type** | **Requirement** |
| :--- | :--- | :--- | :--- |
| Disc diameter, d | mm | Number | Optional |
| Disc thickness, t | mm | Number | Optional |
| Cuboid width, W | mm | Number | Optional |
| Cuboid breadth, B | mm | Number | Optional |
| Cuboid height, h | mm | Number | Optional |
**Figure 1.** Miniature plain disc and plain cuboid specimens, respectively.
**Table 2.** Test data (general)

| **Label** | **Units** | **Type** | **Requirement** |
| :--- | :--- | :--- | :--- |
| Test result key | N/A | Number | Generated |
| Test/specimen identifier | N/A | Text | Mandatory |
| Test standard | N/A | Mutable list | Optional |
| Quality remark | N/A | Immutable list | Optional |
| Reference to report | N/A | Text | Optional |
| Test machine | N/A | Text | Optional |
| Test date (yyyy-mm-dd) | N/A | Text | Optional |
| Data tested by | N/A | Text | Optional |
| Test remark | N/A | Text | Optional |
**Table 3.** Test data (procedure)

| **Label** | **Units** | **Type** | **Requirement** |
| :--- | :--- | :--- | :--- |
| Temperature | °C | Number | Mandatory |
| Ambient temperature | °C | Number | Optional |
| Test control | N/A | Immutable list {Stepped force; Continuous force; Stepped displacement; Continuous displacement} | Mandatory |
| Test control mode | N/A | Immutable list {linear; quadratic} | Optional |
| Loading rate | N/s | Number | Optional |
| Unloading rate | N/s | Number | Optional |
| Displacement rate (loading) | mm/s | Number | Optional |
| Displacement rate (unloading) | mm/s | Number | Optional |
| Indenter shape | N/A | Immutable list {pyramid with square base; pyramid with triangular base; ball; cone} | Mandatory |
| Indenter material | N/A | Immutable list {diamond; metal} | Mandatory |
| Radius of spherical indenter, r | mm | Number | Optional |
**Table 4.** Test data (test results)

| **Label** | **Units** | **Type** | **Requirement** |
| :--- | :--- | :--- | :--- |
| Maximum test force, F(max) | N | Number | Mandatory |
| Time at F(max), t(hold) | s | Number | Optional |
| Indenter contact depth at F(max), h(c) | mm | Number | Mandatory |
| Projected area at h(c), A(p) | mm² | Number | Optional |
| Maximum indentation depth, h(max) | mm | Number | Optional |
| Permanent indentation depth, h(p) | mm | Number | Optional |
| Tangential indentation depth, h(r) | mm | Number | Optional |
| Calculation method | N/A | Mutable list {Linear extrapolation; Power law} | Optional |
| Indentation hardness, H(IT) | GPa | Number | Optional |
| Test piece indentation modulus, E(IT) | GPa | Number | Optional |
| Plane strain modulus, E* | GPa | Number | Optional |
| Reduced plane strain modulus of the contact, E(r) | GPa | Number | Optional |
| Indentation creep, C(IT) | % | Number | Optional |
| Indentation relaxation, R(IT) | % | Number | Optional |
| Poisson's ratio, nu(r) | None | Number | Optional 10 |
| Elastic reverse deformation work, W(elast) | Nm | Number | Optional |
| Total mechanical work, W(total) | Nm | Number | Optional |
| W(elast)/W(total), eta(IT) | None | Number | Optional |
**Table 5.** Test Result - Curve Data

| **Label** | **Units** | **Type** | **Requirement** |
| :--- | :--- | :--- | :--- |
| Time, t | s | Number | Optional |
| Force, F | N | Number | Optional |
| Indentation depth, h | mm | Number | Optional |
| Surface area, A(s) | mm² | Number | Optional |
| Actual temperature | °C | Number | Optional |
As depicted in Figure 2, individual test results will be accompanied by
metadata (ancillary information) that puts the test result into context. This
metadata includes source information (describing the provenance of the data);
material pedigree data (of which there are two in the case of a dissimilar
weld); specimen data; and test conditions. As shown in the table in the
diagram, any single category (the material category in this case) consists of
subcategories of data, each of which in turn will include many individual
fields.
**Figure 2.** Metadata that accompany a result for a mechanical test.
In the figure, the material category is expanded to show the various
subcategories that constitute material pedigree data. The other categories can
be similarly expanded, so that specimen data for example extend to test piece
configuration, dimensions, surface preparation, coatings, etc.
Although the final WP6 test matrix has yet to be delivered, M4F materials
selection is well advanced. For WP6, the preliminary materials and their
conditions are listed in Table 6.
**Table 6.** Specification of materials and performed irradiations.

| **Material** | **Producer (heat)** | **Project relation** | **Provider in M4F** | **Existing neutron irradiation** | **Performed ion irradiation/NI** |
| :--- | :--- | :--- | :--- | :--- | :--- |
| **Common to WP6 and WP2** | | | | | |
| Eurofer97 | Böhler (E83699) | EFDA, MatISSE | SCK·CEN | 300°C; 0.3–3 dpa<br>300°C; 0.06/0.6/1 dpa<br>290 & 450°C; 0.11 dpa | |
| Fe-9Cr (ferritic) | OCAS (G385) | MatISSE | SCK·CEN, HZDR | 290°C; 0.11 dpa & 450°C; 0.10 dpa | 300°C; 0.1 dpa<br>200°C; 0.5 dpa<br>300°C; 0.5 dpa<br>450°C; 0.5 dpa |
| **Specific to WP6** | | | | | |
| Gr. 91 | Industeel (255518) | Demetra, MatISSE | SCK·CEN | 450°C; 0.1 dpa | |
| Fe | OCAS (G379) | MatISSE | SCK·CEN, HZDR | 450°C; 0.1 dpa | |
| **Optional** | | | | | |
| T91 | Ugine (36224) | SPIRE, MATTER | | 200°C; 2.95/4.36 dpa [3,9,11]<br>300°C; 0.06/0.6/1 dpa | three-step ion irr., 200°C; 2.5 & 3.5 dpa |
| Fe-9Cr (martens.) | OCAS (L252) | GETMAT, MatISSE | SCK·CEN | 300°C; 0.06/0.6/1 dpa<br>290 & 450°C; 0.11 dpa | 200°C; 0.5 dpa<br>300°C; 0.5 dpa<br>450°C; 0.5 dpa<br>+ GETMAT |
| Pure Fe | | | SCK·CEN | | |
| Pure W | | | SCK·CEN | | |
For each of the materials listed in Table 6 and as per the example shown in
the Figure 3 screen capture for the Gr. 91 material, a MatDB record exists
based on the corresponding material certificate (for commercial steels) and
material reports for model alloys.
**Figure 3.** MatDB Gr.91 material record.
### Microstructural data
With MatDB already providing support for microstructural data, the possibility
of a generic template for capturing microstructural characterisation data will
be explored within M4F. Some suggestions can be collected from previous
projects. Further, the analysis and storage of microstructural data was
one of the main activities of the FP7 ICMEG project 11 that ended in 2016
and that now continues in the frame of the European Materials Modelling
Council (EMMC) 12 , which is itself an H2020 CSA project. One outcome of
ICMEG was the identification of an hierarchical structure comprising both a
spatially resolved and simultaneously also a statistical description of data.
Especially for microstructures, a spatially resolved description was
identified as a mandatory basis for a sound physics description of phenomena.
Schmitz et al 13 provide a comprehensive system of metadata descriptors for
the description of a 3D microstructure that may serve as a first basis for
standardization and will simplify the data exchange between different
numerical models, as well as promote the integration of experimental data into
numerical models of microstructures. In particular, HDF5-formatted files are
proposed 14 to allow for both a spatially fully resolved description and for
a statistical representation of the data. With MatDB already supporting rudimentary features for storing microstructural characterisation data from analyses performed prior to and after testing, an opportunity exists during M4F project execution to examine the ICMEG data exchange format in the context of an existing solution.
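As a concrete illustration of that approach, the sketch below uses the h5py library to store both a spatially resolved description and a statistical summary of radiation damage features in a single HDF5 file; the group layout and attribute names are assumptions for illustration, not the ICMEG format.

```python
import h5py
import numpy as np

# Minimal sketch of an HDF5 microstructure file combining a spatially
# resolved description with a statistical one; the layout is illustrative.
with h5py.File("microstructure_example.h5", "w") as f:
    # Spatially resolved: voxelised grain-ID map plus defect cluster positions
    spatial = f.create_group("spatial")
    spatial.create_dataset("grain_ids",
                           data=np.random.randint(1, 50, size=(64, 64, 64)))
    spatial.create_dataset("cluster_positions_nm",
                           data=np.random.rand(200, 3) * 1000)

    # Statistical: the quantities modellers need (mean size, density, composition)
    stats = f.create_group("statistics/clusters")
    stats.attrs["mean_size_nm"] = 2.4
    stats.attrs["number_density_per_m3"] = 3.1e23
    stats.attrs["composition"] = "Cr-rich"
    stats.attrs["technique"] = "TEM"  # error quantification depends on technique
```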
### Modelling data
Although MatDB does not presently support a module explicitly for modelling
data, where modelling of a specific test is undertaken there is the
possibility to associate the modelling results with the corresponding test
result by way of attached document(s). Usefully, a template and vocabulary for
materials modelling has been delivered by the recently completed MODA CEN
Workshop (CEN/WS MODA) 15 , which itself builds on the results of prior EU
funded materials modelling projects 16 . Use of the MODA template for
capturing M4F modelling data will be the subject of investigation for the
duration of the project.
# FAIR DATA
## MAKING DATA FINDABLE, INCLUDING PROVISIONS FOR METADATA
All the experimental data coming from M4F mechanical testing will be enabled
for citation with DataCite DOIs, for which the naming convention will rely on
the corresponding database technical keys and the bibliographic metadata will
be compliant with the most recent version (v4.1) of the DataCite Metadata
Schema 17 . Thus, the title and abstract accompanying each data set will
ensure the data are findable. Further, with the bibliographic metadata also
including the DOI version number, specific versions of each data set can be
identified. As shown in the Figure 4 screen capture, for ease of referencing
of large numbers of data sets created in the scope of M4F, the MatDB data
citation service also supports DOI catalogs, whereby a group of data sets can
be allocated a DOI, thereby allowing a specific group of data sets i.e. a data
catalog, to be referenced from a single citation.
**Figure 4.** MatDB DOI catalog.
As depicted in Figure 2, an ad hoc data model exists that defines the metadata accompanying each individual test result. With a view to promoting re-use, a subset of the metadata is mandatory.
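By way of illustration, a minimal bibliographic record along the lines of the mandatory DataCite fields could look like the sketch below; all values are placeholders, and the actual records are generated by the MatDB data citation service.

```python
import json

# Sketch of a bibliographic record covering the mandatory DataCite fields
# (identifier, creators, titles, publisher, publicationYear, resourceType);
# every value below is a placeholder, not a real M4F record.
record = {
    "identifier": {"identifierType": "DOI", "identifier": "10.9999/matdb-example"},
    "creators": [{"creatorName": "Example, Researcher"}],
    "titles": [{"title": "Nanoindentation test data for Eurofer97 (example)"}],
    "publisher": "European Commission JRC, MatDB",
    "publicationYear": "2018",
    "resourceType": {"resourceTypeGeneral": "Dataset"},
    "version": "1.0",  # supports identifying specific versions of a data set
}
print(json.dumps(record, indent=2))
```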
## MAKING DATA OPENLY ACCESSIBLE
The H2020 ORD pilot follows the principle 'as open as possible, as closed as
necessary' and focuses on encouraging sound data management as an essential
part of research best practice. M4F data management is entirely consistent
with this principle.
Scientific data typically have significant financial and intellectual value. Consequently, there is (1) a need to ensure the data are properly understood in order to enable appropriate re-use, e.g. for analysis and design, and (2) an opportunity to promote data exchange by way of bilateral arrangements. With a view to encouraging appropriate re-use and promoting data exchange, the M4F data access policy is formulated accordingly.
All relevant M4F WP6 test data will be deposited at and made available from
the European Commission materials database, MatDB. As shown in the Figure 5
screen capture, data retrieval is an intuitive process that requires no
additional software or documentation and relies on a preliminary selection
(based on test type, source, and material) and optional advanced selection
(typically based on test conditions and parameters). Having made the
selection, the corresponding data sets can be browsed online and/or downloaded
manually in PDF, MS Excel, and/or XML formats.
**Figure 5.** MatDB Open Access data retrieval interface.
All M4F data sets will be citable, meaning that there will be bibliographic metadata in the public domain. This serves two immediate purposes, namely to promote discovery and to ensure data owners are acknowledged in derivative works (and hence have an incentive to share their data). Perhaps more
importantly, data citation respects predefined data access levels, meaning
that Open Access data remain open, while restricted data remain restricted. In
which case, there is the possibility for third parties to request access to
restricted data sets, so that the data owners have the opportunity to release
restricted data under terms that meet their interests, whether these are
intellectual or commercial. In this respect, the M4F data sharing policy will
extend to the consortium (1) considering all data access requests with a view
to making data sets available in the circumstance that it would add value to
the overall objectives of the project and related activities and (2) reviewing
the Open Access option at midterm.
Where citable data sets are restricted and data access requests are submitted,
these requests will be forwarded to the technical co-ordinator for their
consideration. In the circumstance that M4F data sets are made Open Access,
MatDB allows direct access to the data according to the terms of the EU
Copyright Notice 18 . In all circumstances, where data are cited in
derivative works, the individuals and organizations that have accessed the
data will be known.
## MAKING DATA INTEROPERABLE
For managing the data, M4F will utilize established metadata and harvesting
standards, including the
DataCite Metadata Schema. As already indicated, each of the data sets will be
assigned a DataCite DOI, so that all data sets can be cited in derivative
works. The result will be that for every data set, bibliographic metadata
corresponding to the mandatory fields of the DataCite metadata schema v3.0
will be openly accessible. All records will be available from the DataCite
metadata server. Since
DataCite supports both Dublin Core and OAI PMH, these metadata data can be
searched and harvested programmatically.
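A programmatic harvest might then look like the following sketch, assuming DataCite's public OAI-PMH endpoint and Dublin Core records; the endpoint URL and query parameters shown are assumptions for illustration.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Sketch of harvesting Dublin Core metadata over OAI-PMH; the endpoint URL
# is assumed here and the query is generic, not M4F-specific.
BASE = "https://oai.datacite.org/oai"
url = BASE + "?verb=ListRecords&metadataPrefix=oai_dc"

with urllib.request.urlopen(url) as response:
    tree = ET.parse(response)

# Print every Dublin Core title found in the first page of results
DC = "{http://purl.org/dc/elements/1.1/}"
for title in tree.iter(DC + "title"):
    print(title.text)
```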
As evidenced by the lack of engineering science data formats registered at the
RDA metadata standards repository 19 , formats in support of
interoperability in the engineering sector are almost non-existent. To address
this shortcoming, M4F will contribute to a series of CEN Workshops on formats
for engineering materials data 20 by organizing an 18-month CEN Workshop to
develop prenormative standards for nanoindentation test data. In the scope of
this new CEN Workshop, data models and formats will be derived from the ISO
14577 standard for instrumented indentation testing 9 in accordance with the
methodology established during the prior CEN Workshops.
## INCREASING DATA RE-USE
The data license is specified by way of the DataCite rights property. For Open
Access data sets, the default license is the EU Copyright Notice 18 for JRC
data sets and CC-BY for non-JRC data sets, while for restricted access data
sets the license is by arrangement with the creator organization.
As already indicated, at the outset of the project the data will be restricted
and any requirements for an embargo period will be given consideration at the
time access requests are received.
It is anticipated that the data will remain re-usable beyond the term of the
project. With re-usability largely dependent on the quality and completeness
of the data sets, the existing MatDB quality assurance procedures will be
relied upon to promote re-use over the longer term. These procedures extend to
the requirement that tests are performed according to internationally accepted
standards and protocols; to mandatory metadata requirements; and to validation
of individual data sets by a subject matter expert prior to their becoming
available from the MatDB data retrieval module.
Beyond the reliance on MatDB quality assurance procedures, at its M12 project
meeting the M4F consortium will give consideration to the appointment of an
expert panel responsible for data quality, including setting project-specific
mandatory data requirements and reviewing individual data sets.
# ALLOCATION OF RESOURCES
M4F will take advantage of existing JRC services for managing engineering
materials data and thus there are no costs for repository services. In case of
modifications, such as new metadata fields, the work is anticipated in the M4F
grant agreement (GA).
As the service provider, JRC will host webinars to demonstrate the data entry
procedure so that all partners can work effectively. Beyond the term of the
project, the JRC will continue to host the data sets. In all circumstances,
the JRC simply provides a hosting service, so that ownership of the data
always remains with the creator organization.
Individual project partners are responsible for uploading their test results
to MatDB and the work is planned in WP6 as an integral component of the test
campaign.
# DATA SECURITY
The MatDB database is the subject of continual development and maintenance and
is backed-up daily.
# ETHICAL ASPECTS
The release of experimental data to states that are subject to an embargo on technical data concerning nuclear power plant (NPP) construction may possibly be relevant. However, given that the data will be restricted at the outset of the project, any ethical issues will be taken into consideration at the time access requests are received. The same considerations will be taken into account when the Open Access option is reviewed at midterm.
Other than the names of data creators and project leads, there is no personal
data and hence no requirement for informed consent.
# OTHER ISSUES
The JRC abides by the principles of the JRC Data Policy 21 , which although
not extending to organizations other than the JRC, does ensure best working
practices when managing data coming from sources other than the JRC.
# 1 Introduction
## 1.1 Preamble
The _**5G ESSENCE Project** _ 1 ( _**Grant Agreement (GA) No.761592** _ )
_-hereinafter mentioned as the “Project”-_ is an active part of the 5G-PPP –
phase 2 initiative 2 .
5G ESSENCE addresses the paradigms of Edge Cloud computing and Small Cell as-
a-Service (SCaaS) by fuelling the drivers and removing the barriers in the
Small Cell (SC) market, forecasted to grow at an impressive pace up to 2020
and beyond, and to play a “key role” in the 5G ecosystem 3 , 4 .
In fact, 5G ESSENCE provides a highly flexible and scalable platform, able to
support new business models and revenue streams by creating a neutral host
market and reducing operational costs, by providing new opportunities for
ownership, deployment, operation and amortisation.
The technical approach exploits the benefits of the centralisation of Small
Cell functions as scale grows through an edge cloud environment based on a
two-tier architecture, that is: a first distributed tier for providing low
latency services and a second centralised tier for providing high processing
power for computing-intensive network applications. This allows decoupling the
control and user planes of the Radio Access Network (RAN) and achieving the
benefits of Cloud-RAN (C-RAN) without the enormous fronthaul latency
restrictions 5 . The use of end-to-end (E2E) network slicing mechanisms will
allow sharing the 5G ESSENCE infrastructure among multiple operators/vertical
industries and customising its capabilities on a _per-tenant_ basis. The
versatility of the architecture is enhanced by high-performance virtualization
techniques for data isolation, latency reduction and resource efficiency, and
by orchestrating lightweight virtual resources enabling efficient Virtualized
Network Function (VNF) placement and live migration.
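To make the per-tenant slicing idea concrete, a minimal sketch of what an E2E slice descriptor might contain is given below; the field names and values are illustrative assumptions and do not correspond to the actual 5G ESSENCE interfaces.

```python
# Illustrative E2E slice descriptor for a multi-tenant, two-tier deployment;
# names and values are assumptions, not the 5G ESSENCE specification.
slice_descriptor = {
    "tenant": "operator-A",
    "radio": {"resource_share_percent": 30},   # per-tenant radio share (RRM)
    "edge_tier": {                             # first tier: low-latency services
        "vnfs": ["local-cache", "user-plane-function"],
        "max_latency_ms": 5,
    },
    "central_tier": {                          # second tier: compute-intensive NFs
        "vnfs": ["control-plane-function", "analytics"],
        "vcpu": 16,
    },
    "isolation": "per-tenant",                 # data isolation between slices
}
print(f"Slice for {slice_descriptor['tenant']}: "
      f"{slice_descriptor['radio']['resource_share_percent']}% radio share")
```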
Among the fundamental 5G ESSENCE objectives are: Full specification of
critical architectural enhancements; definition of the baseline system
architecture and interfaces for the provisioning of a cloud-integrated
multitenant SC network and a programmable radio resources management (RRM)
controller; development of the centralized software-defined radio access
network (SD-RAN) controller to program the radio resources usage in a unified
way for all CESCs (Cloud-Enabled Small Cells); exploitation of high-
performance and efficient virtualization techniques for better resource
utilization, higher throughput and less delay at the network service creation
time; development of orchestrator’s enhancements for the distributed service
management; demonstration and evaluation of the cloud-integrated multi-tenant
SC network; conduct of a market analysis and establishment of new business
models, and finally, maximization of impact to the realization of the 5G
vision 6 .
The 5G ESSENCE Project will accommodate a range of use cases, in terms of
reduced latency, increased network resilience, and less service creation time.
One of its major innovations is the provision of E2E network and cloud
infrastructure slices over the same physical infrastructure, so that to
fulfill vertical-specific requirements as well as mobile broadband services,
_in parallel_ .
5G ESSENCE leverages knowledge, SW modules and prototypes from various 5G-PPP
Phase-1 projects, SESAME 7 being particularly relevant.
Building upon these foundations, very ambitious objectives are targeted,
culminating with the prototyping and demonstration of 5G ESSENCE system in
three real-life use cases associated to vertical industries, that is: edge
network acceleration in a crowded event; mission critical applications for
public safety (PS) communications providers, _and_ ; in-flight entertainment
and connectivity (IFEC) communications.
## 1.2 Framework for Data Management
An old tradition and a new technology have converged to realize an exceptional public good. The “ _old tradition_ ” is the willingness of
scientists and scholars to publish -or to make known- the results of their
research in scholarly journals without payment, for the sake of inquiry and
knowledge and for the promotion of innovation. The “ _new technology_ ” is the
Internet that has radically modified our lives in the way we work, we study,
we amuse or we perceive the modern digital world 8 . The Internet has
fundamentally changed the practical and economic realities of distributing
scientific knowledge and cultural heritage. For the first time ever, the
Internet now offers the chance to constitute a global and interactive
representation of human knowledge, including cultural heritage and the
guarantee of worldwide access 9 , 10 . The challenge becomes greater in
the framework of the forthcoming 5G applications and facilities 11 .
The “public good” they can thus make possible is the world-wide electronic distribution of the peer-reviewed journal literature, together with completely free and unrestricted access to it by all scientists, scholars, teachers, students, and other curious minds. Removing access
barriers to this literature will accelerate research, enrich education, share
the learning of the rich with the poor and the poor with the rich, make this
literature _as useful as it can be_ , and lay the foundation for uniting
humanity in a common intellectual conversation and quest for knowledge.
According to the provisions of the 5G ESSENCE Grant Agreement (GA) 12 , all
involved partners “ _must implement the Project effort as described in the
respective Annex 1 and in compliance with the provisions of the GA and all
legal obligations under applicable EU, international and national law”._
Effective research data management is an important and valuable component of
the responsible conduct of research. This document provides a data management
plan (DMP), which describes how data will be collected, organised, managed,
stored, secured, backed up, preserved and, where applicable, shared.
The scope of the present DMP is to make the 5G ESSENCE data easily
discoverable, accessible, assessable and intelligible, useable beyond the
original purpose for which it was collected as well as interoperable to
specific quality standards.
7 H2020 5G-PPP SESAME (Grant Agreement No.671596) project: _Small Cells Coordination for Multi-tenancy and Edge Services_. For more details see: http://www.sesame-h2020-5g-ppp.eu/

8 See, for example, the framework discussed within: European Commission (2012, June): Communication on “ _A European strategy for Key-Enabling Technologies - A bridge to growth and jobs_ ” (COM(2012) 341 final, 26.06.2012). Brussels, European Commission, 2012.

9 See the context proposed in: Barfield, C.E., Heiduk, G.S., and Welfens, P.J.J. (2003): _Internet, Economic Growth and Globalization_. Springer.

10 Tselentis, G., Domingue, L., Galis, A., Gavras, A., et al. (2009): _Towards the Future Internet – A European Research Perspective_. IOS Press (2009).

11 See, among others, the context discussed in: Chiang, M., Balasubramanian, B., and Bonomi, F. (2017, June): _Fog for 5G and IoT_. Wiley.

12 As predicted in _Article 7.1 (“General obligation to properly implement the Action”)_ of the GA.
# 2 Project Management Structure and Procedures
Being a **large contribution Project** of 30 calendar months' duration, comprising **21 partners** 7 , and of complexity comparable to traditional large IP projects, 5G ESSENCE has a carefully designed and established management structure, based on the coordinator's and partners' experience in running large EC-funded projects; it comprises a **comprehensive and lightweight management structure.**
The main goal of the management structure, as shown in _**Figure 1** _
(below), is to ensure that the Project will reach its objectives, in the
scheduled time, and making use of the budgeted resources, while explicitly
complying with the Commission’s regulation and applied procedures.
The well-defined project management (PM) structure ensures a proper level of coordination and cooperation amongst the consortium members. Additionally, project management implies, _inter alia_, the following responsibilities: project administration, project organization, management of the technical progress of the project according to plans, and coordination with the other EC projects in the 5G-PPP 8 and other interested parties. The Project Coordinator ( _OTE_ ) already has experience in managing large European projects that fully qualifies it to lead such an initiative. An
intensive horizontal (between WPs) and vertical (between project management
and partners) communication and collaboration has been put in place, for the
proper and within due time execution of all related actions.
The 5G ESSENCE- _based_ management activities comprise administrative and
technical issues, including the legal framework and the organizational
structure of the complete Project’s framework. Furthermore, a roadmap of
meetings and workshops and related activities as well as quality assurance
procedures and steering tools are described. The goal of the project
management activities is also to “identify and address” potential issues,
risks or conflicts emerging across partners, and manage the intellectual
property related to both prior knowledge as well as Project achievements.
The 5G ESSENCE partners have significant experience with collaborative
projects and have been -or are- already working together with other consortia.
All partners have a long-term strategic interest in the field, and most of
them have contributed significantly to the R&D topics at the core of the
5G-PPP vision in previous/running projects. Main criteria for the selection of
each partners’ role were excellence in the field, reliability, experience and
commitment, as discussed in more details in the context of the Project’s GA.
5G ESSENCE consists of eight (8) distinct Work Packages, as described in _Section 3.1_ of the corresponding DoW. A visual representation of the interdependencies between the work packages is given in the Gantt and Pert diagrams, which appear in _Section 3.1.1_ and _Section 3.1.4_ of the DoA, respectively. The advanced research parts of the Project will be managed using agile management, based on decision points and concrete milestones.
The Project organisational structure and decision-making mechanisms have also been formalised in the Consortium Agreement (CA), which was signed before the official start of the Project. The management levels and their interdependencies are described in the following sections.
interdependencies are described in the following sections.
In the rest of this Section, we explicitly describe the governance part that
identifies the key roles and bodies, the management process, knowledge and
innovation management, including the risk assessments.
**Figure 1:** 5G ESSENCE Management Structure
## 2.1 Management bodies and Organization
The management bodies employed in 5G ESSENCE comprise persons, committees and
other entities that are responsible for making management decisions,
implementing management actions, and their interrelation. The management
bodies are illustrated in _**Figure 1** _ and include:
* PM - Project Manager (Dr. Ioannis Chochliouros, _OTE_ , for administrative management);
* TM - Technical and Scientific Manager (Dr. Anastasios Kourtis, _NCSRD_ , for technical management);
* IA - Innovation Architect (Mrs. Maria Belesioti 9 , _OTE_ , for knowledge, innovation & exploitation management);
* KIM Team (Knowledge and Innovation Management Team);
* SM - Standardization Manager (Dr. Felipe Huici, _NEC_ , for standardisation and exploitation management);
* Diss&Comm (Dissemination and Communication) Leader (Mrs. Maria-Rita Spada, _WI3_ , for dissemination and communication management);
* GA - General Assembly (one representative per partner, administrative management);
* PB - Project Board, executive committee acting as decision-implementation body;
* AB - Advisory Board (chaired by PM, for international visibility beyond Europe);
* WPLs - Work Packages Leaders, and;
* TLs - Task Leaders.
Their detailed role and duties are described in the next subsections.
#### (i) Project Manager (PM)
The Project Manager (PM) for 5G ESSENCE is Dr. Ioannis Chochliouros, who is a
senior manager and department head at OTE. Dr. Chochliouros is leader of _OTE
Research Programs_ within the _Fixed & Mobile Technology Strategy and Core
Network Division, _ within _OTE_ , since 2005. Dr. Chochliouros who is also
exercising the role of the Project Coordinator (PC) has substantial and proven
experience in the coordination of both scientific and RTD projects involving
many partners and complex research goals and has been involved in decision-
making positions in at least 45 (European, national and international)
research projects. He is also Project Manager of the ongoing 5G-PPP Phase-1
SESAME project. The main role of the PM is to take charge of the overall
administrative management of the Project, being the single point of contact
with the EC. The PM is responsible for the following tasks _(amongst others
tasks as explicitly defined by the EC Grant Agreement and the partner’s
Consortium Agreement)_ : (i) Monitor Project’s progress on a daily basis, for
continuous rating of achievements, objectives, tasks, WPs with global view of
the overall Project, ensuring a smooth running of activities and collaboration
among all partners, identifying problems and consequences for future research;
(ii) Provide the Project Management Plan which describes the project
management structure, procedures for communication, documentation, payments
and cost statements, procedures to control Project progress and risk
management; (iii) Quality procedures and quality assurance (QA); (iv)
Coordination between the EC and the consortium, communicating all information
in connection with the Project to the EC; (v) Document transmission to the EC,
including all contractual documents and reports related to the administrative,
financial, scientific, and technical progress of the Project; (vi) Coordinate
and manage the Project’s Advisory Board together with the TM; (vii)
Participate in the 5G-PPP programme-level Steering Board (SB) as recommended
by the 5G-PPP program. In summary, the PC is the legal, contractual, financial
and administrative manager of the Project.
#### (ii) Technical and Scientific Manager (TM)
The Technical and Scientific Manager (TM) for 5G ESSENCE is Dr. Anastasios
Kourtis, Research Director at _NCSRD_ . He has more than 30 years of
experience in managing and successfully executing research and industrial
projects, in particular, at _NCSRD_ , he has been an active player from the
start of the EC framework programs and most recently within FP7, where he is
currently PM of T-NOVA 10 (FP7 ICT) and TM for VITAL 11 ( _H2020_ ICT),
CloudSat 12 (ESA) projects. He is also Technical Manager of the ongoing
5G-PPP Phase-1 SESAME project. He has a strong background on wireless and
wired broadband network infrastructures, multimedia applications, Quality of
Service (QoS), network management (NM) and network virtualization. The TM is
in charge of the overall scientific and technical management and progress of
the Project. He is responsible for the correct execution of the technical
activities of the contract, as described in the respective GA. His tasks
comprise, _in particular_ , ensuring timely release, technical high quality
and accuracy of technical deliverables. The TM is the “promoter” of the
technical achievement of the Project, in association with the PM and the
Diss&Comm Manager (i.e., the WP8 Leader), to ensure appropriate Project
visibility. He works in close cooperation with the WP Leaders and will receive
the support of the PM. The TM will also participate in the programme-level Technology Board (TB) established by the 5G-PPP, towards technical planning of joint activities and monitoring of progress against the technical KPIs.
#### (iii) Innovation Architect (IA)
The 5G ESSENCE project has appointed a dedicated Innovation Architect (IA),
who will chair the _Knowledge and Innovation Management (KIM)_ _Team_
activities in the Project, together with the Standardisation Manager and the
Technical Manager. The role of the Innovation Architect is to study and analyse both market and technical aspects, and to “bridge” the Project research achievements to a successful implementation and deployment in the real world.
The Innovation Architect for 5G ESSENCE is currently Mrs. Maria Belesioti 13
, expert from OTE’s R&D. She brings several years of market and mobile-industry experience and background, has a successful track record in productising research and innovation activities and patents, and has the experience and capabilities to recognise (and foster) “ _how advanced scientific results can be transformed into products and market opportunities_ ”. Indeed, the Innovation Architect will assist and advise the Project in best responding to emerging market opportunities. In turn, by thoroughly following the evolution of the sector, the new emerging technologies and the products from 5G ESSENCE, the IA will help bring all this inside the Project, utilising her position as chair of the Knowledge and Innovation Management (KIM) activities in the Project.
#### (iv) Knowledge and Innovation Management (KIM) Team
**Knowledge and Innovation Management (KIM) Team:** The KIM team, which is composed of PB members and WPLs and chaired by the IA, will assure effective innovation management, developing and constantly updating both a market analysis and a business plan for the results achieved by 5G ESSENCE, and also monitoring IPR issues as regulated in the CA. For handling patents, the consortium will also apply proven methods used in previous Community projects.
#### (v) Standardisation Manager (SM)
The 5G ESSENCE Project has appointed a dedicated Standardization Manager (SM),
who will coordinate the standardisation activities in the Project. The Project
has appointed Dr. Felipe Huici, from _NEC_ , to undertake the corresponding SM
role. The main activity of the SM is to monitor and plan the standardisation
strategy together with the Innovation Architect and the Technical and
Scientific Manager, and to periodically “monitor and assess” the
standardisation potential of the scientific results from the Project. Dr.
Huici brings several years of experience in Standardisation within _NEC_ and
has both the knowledge and the ability to quickly “identify” opportunities for
standardisation and to spot the right Standards Developing Organisation (SDO)
for specific 5G ESSENCE innovations. The SM will periodically report to the
KIM team about the progress of standardisation and open-source development
activities within 5G ESSENCE, which will then be reported to the EC and
further, presented to the 5G-PPP WG _on Standardization_ in order to create
joint opportunities, with the aim of creating joint opportunities for
targeting specific SDO’s which need collective strategy from the 5G-PPP board,
in order to “push” European interests globally.
#### (vi) Dissemination & Communication Leader (DissComm Leader)
The 5G ESSENCE Project has appointed a Dissemination & Communication Leader to
coordinate the promotional activities and dissemination of the Project. This
role will be handled by Dr. Maria Rita Spada from _WI3_ , who is also the WP8
Leader. The _Diss &Comm Leader _ will be in charge of all the dissemination-
_related_ priorities in 5G ESSENCE, and she will also pursue the strategy to
have optimum visibility within the 5G-PPP initiative, and beyond, to secure a
wide dissemination and awareness of 5G ESSENCE. The Diss&Comm Leader will work
closely with the WP8 task leaders, and the PB in order to regularly update and
inform about the Diss&Comm activities and will also execute the planned
Diss&Comm strategy in a coherent manner together with the PB members.
#### (vii) General Assembly (GA)
The General Assembly (GA) is the decision-making body of the Project, chaired by the PM and composed of one representative per partner (each having one vote), allowing for the participation of each partner in the collective decisions of the Project. The GA is responsible for the strategic orientation of the Project, that is: overall direction of all activities, reorientation whenever necessary, budget revision and measures taken to manage defaulting partners. To ensure the Project is advancing in time and quality with the work-plan, and is adapting as necessary to external changes, the GA will analyse performance indicators and all other relevant information provided by the Project Board and take into account the evolution of the context in which the Project is carried out, notably scientific, legal, societal and economic aspects. The GA meets at least three times per year; intermediate meetings may be held when they are in the Project’s interest, by decision of the PM or at the request of at least 50% of its members. In between meetings, the GA can take decisions by electronic means. The GA is the ultimate decision-making body and tries to reach consensus whenever possible; failing that, the GA makes decisions by simple majority, with the PM representative holding the deciding vote in case of a tie.
#### (viii) Project Board (PB)
The Project Board (PB), composed of a reduced number of members, will facilitate the management and monitoring of the Project. It is made up of the WP Leaders and will be chaired by the PM with the assistance of the TM, who will deputise for the PM. Compared to the GA, the PB is “more focused” on the operational management and can have more regular meetings, when necessary. It also prepares the decisions to be taken by the GA, ensures that these decisions are properly implemented, and surveys ethical issues. The PB is also in charge of the financial management of the WPs. It is also the responsibility of the PB, as well as of the WPLs, to identify and assess risks and provide contingency plans. The PB is composed of the following people 14 , each of them having both scientific excellence and strong experience in large collaborative research and development projects: Dr. Ioannis Chochliouros ( _OTE_, PM, PB Chair, WP1 Leader), Dr. Anastasios Kourtis ( _NCSRD_, WP2 Leader), Mrs. Maria Belesioti 15 ( _OTE_, WP3 Leader), Dr. Felipe Huici ( _NEC_, WP4 Leader), Evangelos Sfakianakis ( _OTE_, WP5 Leader), Dr. Mathieu Bouet ( _TCS_, WP6 Leader), Dr. Tinku Rasheed ( _ZII_, WP7 Leader), and Dr. Maria Rita Spada ( _WI3_, WP8 Leader).
The PB also defines the communication strategy to update partners about the Project status, the planning and all other issues that are important to them, to give maximum transparency to all involved partners and to increase the synergy of the intended cooperation. Interactive management meetings and technical meetings have an important role in the framework of the communication strategy. All information -such as minutes of meetings, task reports and relevant publications- will be communicated to the PM. It is the strategy of the consortium to guarantee a fast and complete flow of information. All partners have the means to communicate by using electronic mail. The PB has bi-weekly meetings (with extra meetings held as needed), either by conference call or during the Project’s face-to-face Plenary Meetings. The PB makes decisions by simple majority, with the PM representative holding the deciding vote in case of a tie.
#### (ix) Advisory Board (AB)
The 5G ESSENCE consortium will appoint an Advisory Board (AB) in order to monitor the 5G ESSENCE-related developments world-wide and ensure visibility of the Project beyond Europe. The consortium plans to invite a maximum of 3-5 members to the AB, which is to be chaired by the PM. The PM and the PB will periodically organise remote conferences with the AB members to report on the Project activities and will gather information through semestrial inputs. The AB members will be invited to annual workshops of 5G ESSENCE and, further, they will be invited to participate in the final Project demos. While preparing the proposal, the 5G ESSENCE consortium had already received promising inputs (a few letters of support are already included in the Annex, Section A2, of the DoA). The AB is composed of the following members: _AT&T_ (Dr. Steven Wright); _Nokia_ (Mr. Dirk Lindemeier); _Vodafone_ (Dr. Guenter Klas). An updated list of potential candidates was discussed at the recent Project Face-to-Face (F2F) Plenary Meeting and General Assembly Meeting. More stakeholders will be incorporated if the consortium desires to further strengthen its visibility.
#### (x) Work Package Leaders (WPLs)
Each work package is led by the WP Leader (WPL), who is responsible for making the day-to-day technical and management decisions that solely affect that WP. The WP Leaders’ responsibilities include: (i) Leading and coordinating the task activities involved in the WP through the Task Leaders; (ii) Initial quality checking of the WP work and deliverables; (iii) Handling the resource/skills balance within the WP, subject to agreement of the PB to changes; (iv) Participating in the PB meetings; (v) Highlighting to the PB potential threats to the technical success of the Project, and; (vi) Reporting progress to the PB and raising amendments, issues and red flags to the TM if needed.
#### (xi) Task Leaders (TLs)
Each Task is led by the Task Leader (TL), who is responsible for the
activities performed in his/her task, coordinating the technical work, and
making the day-to-day technical decisions that solely affect his/her Task. TLs
should report (internally) to the WPL at least once a month on the progress of
their task.
## 2.2 Management Procedures
Technical and operative decisions will be taken as informally as possible, through achieving consensus. The various procedures are designed to ensure that the Project runs smoothly, by ensuring that the goals are clearly defined and understood, the WPs represent a sensible division of the work and comprise the necessary expertise to fulfil the objectives, responsibilities are clearly assigned, and there are transparent lines of communication among the participants. A Consortium Agreement (CA) explicitly provides the rules and terms of reference for any issue of a legal nature concerning the co-operation among the parties, as well as the Intellectual Property Rights (IPR) of individual partners and the consortium “as a whole”.
For administrative, technical or operative decisions for which no consensus
can be reached, the Project will rely on the Project Board.
For decisions regarding budget redistribution, consortium composition or major changes to the workplan, the General Assembly is the highest decision-making body in the Project (consistent with its role described above). Any project management decision, either technical or administrative, taken by the GA is mandatory for all project members, and may not be overruled within the Project.
### 2.2.1 Reporting to the EC
5G ESSENCE follows the procedures presented in the Project guide 16 to ensure on-time, transparent and high-quality reporting to the EC. Project reporting, as well as internal intermediary reporting, follows a planning approach with several verifications. This method allows delivery of high-quality reports, providing very accurate insight into the status of the Project. The following reporting will be done: (i) Periodic reports will be provided to the EC (M12+2, M24+2, M30+2); (ii) In between the periodic reports there will be internal semestrial reports for the PM to “keep track” of the Project performance. The periodic report is mandatory in all European projects.
Deliverables and milestones follow a procedure with fixed regular reminders, peer review by two (2) partners not involved in the specific reporting, checking by the relevant WPL, followed by final validation by the PM and the PB. This procedure results in on-time, high-quality deliverables and milestones.
Periodic Progress Reports (PPRs) will be collated in line with the reporting periods, prior to each project review, and submitted to the Project Officer by the PM. These reports detail the work performed by the partners, the achievements, collaborations, resources spent/planned, and future plans and, together with the Financial Statements, will serve as the main Project Management documentation.
**_Decision making:_** The GA provides a forum for discussing management issues and major technical issues. Decisions of the GA are binding for the entire Project. All reports, such as the periodic reports, any management reports and the deliverables, will be discussed and approved before being sent to the EC. Procedures for making decisions at a managerial level, to be taken by the GA, are detailed in the Consortium Agreement.
Day-to-day decisions at the technical level are to be taken by the corresponding WP Leader(s) where needed, after consultation with the PM. The Project Board meetings, which will involve the PM and the principal partners, will -if necessary- decide on major issues by a majority vote, with the PM having the casting vote. All decisions will be taken unanimously, if feasible. If the members cannot come to an agreement, a voting procedure -as detailed in the CA- will take place. It is envisaged that a full majority would be necessary to reach a decision. The consortium has planned to physically meet for face-to-face (F2F) meetings at least 3 times a year, where most of the technical meetings (including GA meetings, Joint WP meetings, KIM team meetings, etc.) will be co-located over a period of 2-3 days, at the premises of the project partners (chosen under the principle of giving each partner an equal opportunity to host meetings).
### 2.2.2 Progress Monitoring and Quality Assurance
In order to guarantee an optimal allocation of resources to the Project activities, the tasks as well as the responsibilities and partner involvement have been well defined. The management procedures for monitoring progress and responding to changes have been documented in the Quality Assurance Plan (i.e., the deliverable D1.2, submitted in M2) and are executed regularly. This constitutes a continuous “cyclic” monitoring process to be implemented over the course of the full Project, with each cycle lasting six calendar months. The PM is ultimately responsible for the quality control (QC) of the deliverables to the EC, coordinating closely on technical quality checks with the TM. Consequently, the PM can request remedial action and additional reports, should any doubt regarding progress, timescales or quality of work make this necessary. Every contractual deliverable, prior to its submission to the EC, will be the subject of a peer review by persons not directly involved in either the subject matter or the creation of that deliverable. Where necessary, the PM could request further work of the partners on a deliverable, to ensure that it complies with the Project’s contractual requirements.
The PM will organise regular assessment meetings with all the partners, in addition to the PB meetings. These meetings will serve as preparation for the EC review and the necessary periodic reports. The purpose of these meetings will be to report on the progress so far and to redefine (if necessary) the Description of the Action (DoA) for the remaining part of the GA. The PB will regularly handle risk management and contingency plans. The PM and the PB will jointly be in charge of preparing for the regular project reviews with the EU. Specific access will be set up for the project reviewers (to the Project intranet, code repository and the KIM database) to review the Project progress. The consortium proposed to the EC that three reviews be organised during the Project lifecycle. The European Commission finally accepted two reviews, that is: a first review at M12 and the final review at M30.
**5G ESSENCE internal information flows:** The strategy will be to keep the partners fully informed about the Project status, the planning and other issues that are important with regard to maximising transparency and increasing the synergy and efficiency of co-operation. The communication between partners having closely related work will be more frequent and informal (in _ad-hoc_ meetings, phone conferences and by e-mail), including on-site visits of the personnel involved, when appropriate. Informal technical interim reports covering topics such as technical requirements, architectural issues, progressing techniques, measurement/simulation practices and so on will be developed -if needed- and will be distributed among the Project partners. At an increasing level of formality, WPLs will regularly call for WP phone calls. As a reference, WP-level phone calls will be conducted at least on a monthly basis. The corresponding WPL will be responsible for fixing the agenda, which will usually include time slots for discussions on upcoming Deliverables. The Deliverable Editor will lead this part of the discussion, while the WPL will lead the general technical discussions around the on-going tasks. After the phone call, the WPL will release the minutes, in copy to the TM. In this way, each WPL will report regularly to the TM and will give an overview of the work progress and any arising issues. These lines of communication will ensure that any major deviation from the work plan will be spotted immediately, and prompt and appropriate corrective action can be taken.
The formal flow of information will take place during Technical Meetings (face-to-face/F2F), which will be conducted approximately three times a year. The objectives of these meetings will be to discuss technical issues and overall project progress. Representatives will report to the rest of the partners, thus highlighting any divergence from the proposed plan and schedule. The PM will be responsible (with the assistance of the TM and WPLs) for the preparation of the agendas, co-ordination of the meetings, and production of the minutes. In addition, a project collaborative infrastructure, accessible through the web, has been set up and is used for the distribution of documents among partners. This infrastructure will enable all partners to deposit and retrieve all relevant information regarding the Project. Furthermore, it will include the capability of collaborative editing of documents, thus improving joint document management within the Project. The Project Coordinator has established and will maintain this infrastructure. More detailed information is given in the related 5G ESSENCE deliverable _D1.1 (“Project Website”)_.
**Deliverables handling:** Deliverables will be elaborated as a joint effort among the partners involved in the related WP. Their completion will be under the responsibility of the relevant WPL, who will be assisted by the Deliverable Editor identified in the workplan and will count on the contributions from the other partners. The Deliverable Editor will establish a calendar for the elaboration of the document well in advance of the submission deadline, considering several rounds of contributions and rounds for discussion and refinement.
Once the Deliverable Editor and WPL consider that the document is complete, it will be forwarded to the TM, who will check that it is compliant with the quality assurance (QA) directives. If needed, the document will return to the WP domain for complete alignment with the desired quality. Once approved by the TM, the document will be forwarded to the PB for formal approval before submission to the EC. If comments arise from the PB, the document will again return to the WP domain and a new iteration will be established. When defining the calendar, the following periods need to be considered: (i) the PB validation process starts 10 days in advance of the official deliverable submission deadline; (ii) the TM review process starts 20 days in advance of the official deliverable submission deadline. Therefore, 10 days are available for the TM to review and comment on the document and for the WP to address any comments, before the document is forwarded to the PB. Editorial guidelines (not only for Deliverables but for all types of documents used in the Project), templates and document naming policies will be defined and will be available in the document management platform.
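Purely as an illustration of the review timeline above (the function and the example date are not part of the contractual procedure), a minimal sketch of how the internal review milestones could be derived from a deliverable's official submission deadline:

```python
from datetime import date, timedelta

def review_calendar(submission_deadline: date) -> dict:
    """Derive the internal review milestones for a deliverable:
    the TM review starts 20 days before the official submission
    deadline and the PB validation starts 10 days before it."""
    return {
        "TM review starts": submission_deadline - timedelta(days=20),
        "PB validation starts": submission_deadline - timedelta(days=10),
        "Submission to the EC": submission_deadline,
    }

# Hypothetical deadline, for illustration only
for milestone, day in review_calendar(date(2018, 10, 31)).items():
    print(f"{milestone}: {day.isoformat()}")
```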
**Information dissemination outside the 5G ESSENCE domain:** One of the objectives of 5G ESSENCE is to raise awareness and achieve impact within a wider community. Consequently, a specific task (T8.1) has been included in the workplan, and a specific dissemination plan with concrete goals for dissemination, obliging each individual partner to undertake certain activities and actions, will be defined, as in the related deliverable _D8.1 (“Plans for Dissemination, Communication, Standardisation and Exploitation, Interaction with 5G-PPP”)_. The dissemination processes detail the 5G ESSENCE ambitions and means, and describe the overall process encompassing planning, execution, review and approval, reporting and impact analysis. These will be followed as specified in the CA. Decisions on the dissemination level of the Project foreground will be made by the PB. Any objection to the planned dissemination actions shall be made in accordance with the Grant Agreement.
**Technical problems and conflict resolution:** Technical problems will be discussed at the level of each WP. The WPL will lead discussions and make decisions, while ensuring that the work plan is respected. The WPL shall report to the TM any technical problems or solutions that have, or may have, an influence on other WPs. If a problem cannot be solved at the level of the WP, the TM is responsible for taking a decision to solve the problem amicably. In the unlikely event of a conflict not being resolved at TM level, the PM and PB will be responsible for mediating in the conflict and facilitating its resolution. They will act in accordance with what is established in the Consortium Agreement.
**Consortium Agreement (CA):** As mandated by EU project contractual obligations, all partners of the consortium needed to sign a Consortium Agreement before the contract with the European Commission was executed. The role of the Project Management (and especially of the PM together with the PB) is to modify and/or update the pre-established CA, based on possibly changing conditions in the Project (change of partners, “shift” of responsibilities, change of technical boundary conditions, etc.). The purpose of the actual CA is to specify the internal organisation of the work between the partners, to organise the management of the Project, and to define the rights and obligations of the partners, including -but not limited to- their respective liability and indemnification as to the work performed under the Project, and more generally to define and rule the legal structure and functioning of the consortium. Moreover, the CA also addresses issues such as the appropriate management of knowledge, in the sense of protection of know-how and, more generally, of any knowledge and relevant intellectual property rights in any way resulting from the Project. The CA also has the purpose of integrating or “supplementing” some of the provisions of the Grant Agreement, for example those concerning Access Rights; as to the ruling of certain matters, the CA may set out specific rights and obligations of the partners, which may integrate or supplement, but which will under no circumstance be in conflict with, those of the GA.
# 3 Knowledge Management and Protection Strategy
## 3.1 Management of Knowledge
Information flows within the Project both vertically and horizontally. The “vertical flow” of information comprises principally the administrative issues (e.g., financial progress reports, consolidated reports, meeting minutes and cost claims/advance payments), whereas the scientific and technical information flow is generally more suited to a less formal and horizontal process. The core of the information exchange is the 5G ESSENCE web portal that is visible to 5G ESSENCE partners (also known as the _Collaborative Working Environment_). Any collaborating partner will acquire free access, on a confidential basis, to all items displayed in the KM database, unless additional _ad-hoc_ restrictions have been negotiated in advance. This platform also includes basic workflow tools to automate and simplify the working procedures. For the Project partners, the website provides full access to all achievements in detail, whereas the annual report, publications, and sequence search sections will also be open to the public. The Project summary, general information and public reports will be made available to everybody on the Internet, also as a means to effectively communicate and coordinate, if possible, with parties outside the consortium (such as other related 5G-PPP projects or the European Commission (EC)). The EC will receive a special access code to access the necessary reports, as well as to access prototypes during the review process, if and/or where necessary. The database and periodic reports will greatly help in assembling the Annual and Interim Reports for the Commission.
More detailed information about the exact repositories of the Project,
corresponding to a public website accessed by any third party and to a private
website accessed by authorised physical and/or legal persons is given in the
already submitted deliverable _D1.1 (“Project Website”)_ .
5G ESSENCE will continuously host a comprehensive public website (http://www.5g-essence-h2020.eu) that will contain all relevant information about the Project.
A public section allows sharing information and documents among all partners, as well as with any “third party” (i.e., physical and/or legal persons) that may express interest in accessing such data and receiving information about the scope and the achievements of the 5G ESSENCE effort. The public section presents the specific aims, the vision and objectives, as well as the goals, the plan, the development(s) and the intended achievements of the Project. It is also used to publish the public deliverables and the papers (as well as other works and/or relevant presentations) that are to be presented or accepted in international conferences, workshops, meetings and other similar activities, towards supporting a proper dissemination and exploitation policy of the Project.
Furthermore, it includes references to the related 5G-PPP context, as promoted by the European Commission, which potentially affects the progress of the 5G ESSENCE effort. In addition, the public part includes an indicative description of the profiles of the involved 5G ESSENCE partners, as well as a part for links to other informative areas. There is also an explicit link to a private part of the website, accessible only by the partners (or the “beneficiaries”) of the Project, by using specific credentials (http://programsection.oteresearch.gr). **Figure 2** provides an indicative snapshot of the existing part of the public website.
The private part of the website serves as the “project management and collaboration platform”, bearing (among others) advanced document management features (e.g. document versioning/history, document check-in/out/locking, etc.) and a powerful search functionality to ensure efficient work and collaboration among partners.
The 5G ESSENCE consortium proactively takes supplementary measures to raise awareness of, and encourage the implementation of, the technical, business, social and all other concepts developed in the Project, including through the continuous development of the public website.
**Figure 2:** 5G ESSENCE Public Section - _Welcome Screen_
## 3.2 Ethics and Management of IPRs
The 5G ESSENCE consortium is to respect the framework that is structured by the joint provisions of:
* The _European Directive 95/46/EC_ (“ _Protection of personal data_ ”) 17 , and;
* _Opinion 23/05/2000 of the European Group on Ethics in Science and New Technologies concerning “Citizens Rights and New Technologies: A European Challenge”_ 18 .
The 5G ESSENCE partners will also abide by professional ethical practices and comply with the _Charter of Fundamental Rights of the European Union_ 19 .
It is important to mention, at this part of the work, that detailed issues governing ethics have been fully covered in the framework of Deliverable D9.3 (“ _General Ethics Issues – Requirement No.3_ ”), which was requested by the European Commission as an additional composition in the context of the additional WP9, following the conclusion of the GA preparation. In the same context, issues about the protection of personal data have been covered within Deliverable D9.2 (“ _Protection of Personal Data (POPD) – Requirement No.2_ ”). Both D9.2 and D9.3 have provided detailed provisions for the respective matters they deal with.
Besides the context provided in D9.2, certain guidelines will be implemented
in order to limit the risk of data leaks and these include the following:
* Keep anonymised data and personal data of respondents separate;
* Encrypt data if it is deemed necessary by the local researchers;
* Store data in at least two separate locations to avoid loss of data;
* Limit the use of USB flash drives;
* Save digital files in one of the preferred formats, _and_ ;
* Label files in a systematically structured way in order to ensure the coherence of the final dataset (an illustrative sketch follows below).
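As a purely illustrative sketch (the directory layout and naming scheme below are assumptions, not a convention mandated by D9.2), the last two guidelines could be supported by a small helper of the following kind:

```python
from datetime import date
from pathlib import Path

# Assumed top-level layout: anonymised data and personal data are
# kept in separate locations, per the guidelines above; the personal
# data root would additionally be access-restricted and encrypted.
ANONYMISED_ROOT = Path("data/anonymised")
PERSONAL_ROOT = Path("data/personal")

def labelled_path(wp: str, topic: str, version: int,
                  anonymised: bool, extension: str = "csv") -> Path:
    """Build a systematically structured file name
    (illustrative scheme: <WP>_<topic>_<ISO date>_v<version>.<ext>)
    under the root matching the data category."""
    root = ANONYMISED_ROOT if anonymised else PERSONAL_ROOT
    name = f"{wp}_{topic}_{date.today().isoformat()}_v{version}.{extension}"
    return root / name

# Example usage (hypothetical names)
print(labelled_path("WP5", "edge-measurements", 2, anonymised=True))
```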
The 5G ESSENCE consortium recognises the importance of IPRs under a basic philosophy, as discussed in the following sections. The general architecture and scientific results defined during the course of the Project are public domain research, intended to be used in international fora to advance technological development and scientific knowledge. Basic methods, architectures and functionalities should be available for scrutiny, peer review and adaptation. Only in this way can industry and standardisation groups accept the results of 5G ESSENCE; this is a procedure already applied in many similar research projects to date. IPR will be managed in line with a principle of equality of all the partners towards the foreground knowledge and in full compliance with the general Commission policies regarding ownership, exploitation rights and confidentiality.
Valuable IPRs that might come up during the course of the Project, from work in areas of new technological innovation with direct product use, shall be protected by the consortium and/or the single partner entity within the Project. The IPRs shall be shared under reasonable rules, and the _H2020_ contract rules shall be strictly adhered to.
For handling patents, the consortium will also apply proven methods used in previous EC projects. The partners will inform the consortium of technologies, algorithms, etc. that they offer for use in the WPs and that they have patented, are in the process of patenting, or consider patenting. Similarly, if patentable methods and techniques are generated within Project-based activities, the patenting activities will aim to protect the rights of all partners participating in these specific activities. Lists of patents related to the Project, whether adopted, applied or generated, will be maintained for reference, and are to be included in reports submitted to the Commission.
The Consortium Agreement (CA) provides rules for handling confidentiality and IPR to the benefit of the 5G ESSENCE consortium and its partners. All Project documentation will be stored electronically and as paper copies. Classified documents will be handled according to proper rules with regard to classification (as described above), numbering, locked storage and distribution limitations.
In general, knowledge, innovations, concepts and solutions that are not going to be protected by patent applications by the participants will be made public after agreement between the partners, to “allow others to benefit” from these results and exploit them. However, where results require patents to show the impact of 5G ESSENCE, we will perform freedom-to-operate searches to determine that this does not infringe patents belonging to others.
The policy that will govern the IPR management in the scope of 5G ESSENCE is driven by the following principles, which will be detailed in the Consortium Agreement: (i) Policy for ownership and protection of knowledge; (ii) Dissemination and use policy; (iii) Access rights for use of knowledge; (iv) Confidentiality; (v) Ownership of results / joint ownership of results / difficult cases (i.e., pre-existing know-how so closely linked with a result that it is difficult to distinguish between the two); (vi) Legal protection of results (patent rights); (vii) Commercial exploitation of results and any necessary access rights; (viii) Commercial obligations; (ix) Sub-licensing of relevant patents, know-how and information, and; (x) Pre-existing know-how excluded from the contract.
Nevertheless, specific IPR cases that need a concrete solution beyond the principles fixed above may also arise. In these conflict situations, the General Assembly will be the body responsible for arbitrating a solution. Furthermore, the IPR strategy and its updates will be monitored by the Knowledge and Innovation Management (KIM) team and, during the periodic meetings, any IPR updates will be presented and approved upon consensus of the KIM team.
# 4 Open Access Policy
Usually, academic research is focused on questions of essential scientific interest, the so-called “ _basic research_ ”. This is generally intended merely to disclose new scientific and technical knowledge through publications. On the other hand, the _applied research_ performed by industry is normally aimed at commercialising the resulting innovation and, therefore, is intended to increase the company value. To this end, research results are commonly protected through patents and trade secrets 26 . According to this distinction, a “publication” is the most suitable means of knowledge dissemination for research organisations/universities (ROs), as it permits the fastest and most open diffusion of research results, for the wider benefit of the research communities. On the contrary, patents offer industry the strongest protection to commercialise its innovations and recover the costs of the research investments 27 .
However, this scenario has changed critically, and expectations of _“how ROs create and manage their knowledge”_ are changing rapidly, as this is increasingly considered by academic personnel as a source of income. This is also due to the fact that universities are encouraged to collaborate with private companies on research projects in different areas, which constitutes an expansion of their research interests into other sectors, such as biotechnology, nanotechnology and ICT, just to mention a few. As a consequence, the “boundary” between scientific and applied research has “blurred” and, while the industry dissemination approach did not go through any significant transformation, the ROs’ strategy moved away from traditional “publishing”. ROs have in fact started focusing on the opportunity to patent 28 research results, and to extract as much value as possible from intellectual property (IP).
The two main means to “bring” technical and scientific knowledge to the public are patent applications 29 and journal publications 30 , 31 . With the advent of the Internet, two alternative means are also available for scientists and research companies either to maximise their IP value or to disseminate scientific and technical knowledge: defensive publications 32 and the **_Open Access_ model** 33 , 34 .
26
See the detailed context proposed in: European IPR Helpdesk (July 2013): _Fact Sheet: Patenting v. publishing._ Available at: https://www.iprhelpdesk.eu/FS_Patenting_v._publishing .
27
See, for example: Sapsalisa, E., van Pottelsberghe de la Potterie, B., and
Navon, R. (2006, December): Academic versus industry patenting: An in-depth
analysis of what determines patent value. _Research Policy_ , _35_ (10),
pp.1631-1645.
**28** d’Erme, R. (2013). _“Utility Models: A useful alternative to patents
for small businesses”._ European IPR Helpdesk Bulletin N°8, January - March
2013. Available at: http://www.iprhelpdesk.eu/sites/default/files/newsdocuments/IPR_Bulletin_No8_0.pdf#page=3 .
29
Patenting entails the grant of a set of rights to exclusively use a certain
invention (i.e. product or process) for a certain period of time. In return
for this monopoly, the IP system asks the patent owner to disclose the
technical information describing the invention in order for others to access
it and continue to innovate based on it.
30
Dissemination of scientific knowledge through publication is one of the most
common and rapid instruments. Publishing, however, is not always as timely as
it may appear as the peer review process can delay the final article
publication. Moreover, publishers are often not prone to pay authors of
scientific articles who, in turn, are willing to publish for reasons related
to their career path, besides the primary wish of disseminating knowledge.
31
The protection granted by the IP system to an article or publication is
copyright, which arises automatically when the researcher writes it. It is
worth mentioning that copyright only protects expression of the words
contained in the text and its originality, but not the idea underlying the
research findings. Therefore, the best ways to prevent others from reusing the inventions stemming from the research are patenting or keeping them secret.
32
A _**defensive publication** _ , or _defensive disclosure_ , is an
intellectual property strategy used to prevent another party from obtaining a
patent on a product, apparatus or method for instance. The strategy consists
in disclosing an enabling description and/or drawing of the product, apparatus
or method so that it enters the public domain and becomes prior art.
Therefore, the defensive publication of perhaps otherwise patentable
information may work to defeat the novelty of a subsequent patent application.
Unintentional defensive publication by incidental disclosure can render
intellectual property as prior art. One reason why companies decide to use
defensive publication over patents is cost.
More details can be found at: http://www.defensivepublications.org .
For a more concrete approach upon “defensive publishing” also see:
Adams, S., and Henson-Apollonio, V. (2002, September): Defensive publishing: a
strategy for maintaining intellectual property as public goods. _ISNAR
(International service for National Agricultural Research) Briefing Paper
No.53._ Available at: ftp://ftp.cgiar.org/isnar/publicat/bp-53.pdf .
The public Internet is an emerging functional medium for globally distributing knowledge, also being able to significantly modify the nature of scientific publishing as well as the existing system of quality assurance.
Enabling societal actors to interact in the research cycle improves the
quality, relevance, acceptability and sustainability of innovation outcomes by
integrating society’s expectations, needs, interests and values. Open access
is a key feature of Member States’ policies for responsible research and
innovation by making the results of research available to all and by
facilitating societal engagement 35 . Businesses can also benefit from wider
access to scientific research results; small and medium-sized enterprises
(SMEs), in particular, can improve their capacity to innovate. Policies on
access to scientific information can also facilitate access to scientific
information for private companies. Open access to scientific research data 36
enhances data quality, reduces the need for duplication of research, speeds up
scientific progress and helps to combat scientific fraud 37 .
In the context of the 5G ESSENCE Project, expected publications are to be published according to the _**Open Access (OA)**_ principles 38 . The consortium will make use of both “green” (or self-archiving) and “gold” open access options to ensure open access to an appropriate number of the publications to be produced during the lifetime of the Project. Almost all the “top publications” in the fields related to the Project are expected to be published via IEEE, Springer, Elsevier or ACM, which provide authors with both “gold” -with either a hybrid publication or an open access journals strategy- and “green” open access options.
Major achievements of the Project will be considered for publication in a “gold” open access modality in order to “increase” the target audience. This implies publication in Open Access Journals 39 or in Hybrid Journals 40 with an OA agreement.
33
_**Open Access (OA)** _ refers to the practice of granting free Internet
access to research articles. This model is deemed to be an efficient system
for broad dissemination of and access to research data and publications, which
can indeed accelerate scientific progress.
_Open access (OA)_ refers to online research outputs that are free of all
restrictions on access (e.g. access tolls) and free of many restrictions on
use (e.g. certain copyright and license restrictions). Open access can be
applied to all forms of published research output, including peer-reviewed and
non peer-reviewed academic journal articles, conference papers, theses, book
chapters, and monographs. Also see the discussion proposed in: Suber, P.
(2015): _Open Access Overview_. Available at: http://legacy.earlham.edu/~peters/fos/overview.htm .
Two degrees of open access can be distinguished: _gratis_ open access, which
is online access free of charge, and _libre_ open access, which is online
access free of charge plus various additional usage rights. These additional
usage rights are often granted through the use of various specific Creative
Common licenses. Libre open access is equivalent to the definition of open
access in the _Budapest Open Access Initiative_ , the _Bethesda Statement on
Open Access Publishing_ and the _Berlin Declaration on Open Access to
Knowledge in the Sciences and Humanities_ .
The broader context is extensively discussed, _for example_ , within: Suber,
P. (2012): _Open access_ . MIT Press.
34
Also see: Hess, T., Wigand, R.T., Mann, F., and von Walter, B. (2007): _Open
Access & Science Publishing, Management Report 1/2007 - Results of a Study on
Researchers’ Acceptance and Use of Open Access Publishing. University of
Munich in cooperation with University of Arkansas. Available at: http://openaccess-study.com/Hess_Wigand_Mann_Walter_2007_Open_Access_Management_Report.pdf .
35
European Commission (2012, July): _Commission Recommendation of 17.07.2012 on
access to and preservation of scientific information [C(2012), 4890 final]._
Available at: https://ec.europa.eu/research/sciencesociety/document_library/pdf_06/recommendation-access-and-preservation-scientific-information_en.pdf .
36
Economic and Social Research Council (2010). _ESRC research data policy_ .
Available at: www.esrc.ac.uk/aboutesrc/information/data-policy.aspx .
37
High Level Expert Group on Scientific Data (2010, October): _Final Report:
“Riding the wave: How Europe can gain from the rising tide of scientific
data”_. Available at: http://cordis.europa.eu/fp7/ict/e-infrastructure/docs/hlg-sdi-report.pdf .
38
See the further detailed discussion about “Open Access” as it appears below, in the continuity of the present section.
39
_Open access (OA) journals_ are scholarly journals that are available online
to the reader “without financial, legal, or technical barriers other than
those inseparable from gaining access to the internet itself”. They remove
price barriers (e.g. subscription, licensing fees, pay-per-view fees) and most
permission barriers (e.g. copyright and licensing restrictions). While open
access journals are freely available to the reader, there are still costs
associated with the publication and production of such journals. Some open access journals are subsidized, financed by an academic institution, learned society or a government information center; others are financed by payment of
article processing charges by submitting authors, money typically made
available to researchers by their institution or funding agency. These two are
referred to respectively as "gold" and "platinum" models to emphasize their
distinction, although other times “gold” OA is used to refer to both paid and
unpaid OA. Also see, _among others_ , the broader discussion proposed in:
https://en.wikipedia.org/wiki/Open_access_journal .
The Article Processing Charges (APCs) that apply will be covered by the Project budget.
Under self-archiving -or “green” open access- peer-reviewed scientific research articles will be published in scholarly journals that allow self-archiving options compatible with “green” open access, where the published article or the final peer-reviewed manuscript is archived (deposited) by the author -or a representative, in case of multiple authors- in an online repository before, alongside or after its publication. The 5G ESSENCE Project will give preference to those journals that allow pre-print self-archiving, in order to “maximise” the visibility of Project outcomes.
In fact, the 5G ESSENCE consortium follows the guidelines set forth by the EU in its mandate for open access to all peer-reviewed scientific publications. In order to effectively comply and “guide” the partners towards achieving this goal, an _**Open Access publication policy and strategy**_ 41 is to be put in place, affecting the Project’s governing documentation, and will further be enforced and monitored by the Quality Manager (i.e., the Project Coordinator).
According to this policy, all scientific journal publications resulting from the Project will be made “open access” (with any exception needing to be approved by the Project Coordinator and validated by the EU Project Officer - PO). Further, other scientific publications appearing in conference proceedings and other peer-reviewed books, monographs or other “grey literature” will be made available to the general public through open access archives with very flexible licensing (e.g., Creative Commons licenses) for the scientific community (open access archives such as arXiv (www.arxiv.org), ResearchGate (www.researchgate.net) and CiteSeerX (citeseerx.ist.psu.edu) can be used for this purpose) 42 .
In an effort to “maximise” the expected impact of the scientific results and the associated data and software (SW) code produced in the Project, the 5G ESSENCE consortium will create a dedicated code/data repository in a collaborative open source code management tool (e.g., GitHub 43 ) for 5G ESSENCE, to release all the mature software and other data associated with the scientific publications. This will allow the broader community to “access” the open source software and the related data and/or tools used to derive the scientific results presented in the articles and magazines.
40
A _hybrid open access journal_ is a subscription journal in which some of the
articles are open access. This status typically requires the payment of a
publication fee (also called an article processing charge or APC) to the
publisher.
41
More details for the broader European scope in the related thematic area can
be found at:
https://ec.europa.eu/research/openscience/index.cfm?pg=access&section=monitor .
42
Publication outputs will be placed either on arXiv or an analogous archive (in accordance with the Registry of Open Access Repositories (ROAR)), and links from the project website to these Open Access publications will be published in a timely manner, in order to maximise the impact and visibility of 5G ESSENCE results and its activities.
43
_**GitHub** _ is a Web-based Git repository hosting service. It offers all of
the distributed revision control and source code management (SCM)
functionality of Git as well as adding its own features. Unlike Git, which is
strictly a command-line tool, GitHub provides a web-based graphical interface
and desktop as well as mobile integration. It also provides access control and
several collaboration features such as bug tracking, feature requests, task
management and wikis for every project.
See, _for example_: Williams, A. (2012, July): _GitHub pours Energies into Enterprise – Raises $100 Million From Power VC Andreessen Horowitz_, TechCrunch. Available at: http://techcrunch.com/2012/07/09/github-pours-energies-into-enterprise-raises-100-million-from-power-vc-andreesenhorowitz/ .
GitHub offers both plans for private repositories and free accounts, which are usually used to host open-source software projects (https://github.com/about/press). In recent years, GitHub has become
the largest code host in the world, with more than 5M developers collaborating
across 10M repositories. Numerous popular open source projects (such as Ruby
on Rails, Homebrew, Bootstrap, Django or jQuery) have chosen GitHub as their
host and have migrated their code base to it. GitHub offers a tremendous
research potential. As of 2015, GitHub reports having over 11 million users
and over 29.4 million repositories (https://github.com/about/press),
thus making it the largest host of source code in the world.
An interesting approach for the latter comment is discussed in: Gousios, G.,
Vasilescu, B., Serebrenik, A. and Zaidman, A. (2014). _Lean GHTorrent: GitHub
Data on Demand, in_ MSR-14 Proceedings (May 31- June 01, 2014), Hyderabad,
India.
ACM Publications.
For a wider informative scope about GitHub, also see the discussion presented in: https://en.wikipedia.org/wiki/GitHub .
For a variety of reasons, this sort of free and unrestricted online availability within the OA framework can be economically feasible; it offers any potential reader astonishing power to “find and make use of” relevant literature, while providing authors and their works massive new visibility, readership and impact 20 .
The 5G ESSENCE Project will also produce specific outcomes in terms of the implementation of individual software components, which will be used in scientific publications together with the data collected during experiments carried out within the Project’s lifetime. To make the software and data used in publications available to the related (academic, business or other) community, such software and data will be made open source, or subject to very flexible licensing, and made available via different channels. This potentially includes the creation of repositories in open source code management tools -such as GitHub, or an “equivalent” one- in which to store the developed software that is in a “mature” stage, updated from time to time as new stable releases of the code become available. Furthermore, since the 5G ESSENCE consortium aims to maximise the impact inside the related SDN and NFV communities, the software will also be made available inside open source initiatives 21 (for example: OpenDaylight 22 , OPNFV 23 , etc.) whenever possible and according to the provisions of both the GA and the CA documents. With this intended policy, the 5G ESSENCE consortium will disseminate Project-based achievements to an audience as wide as possible, and will thus allow other parties to replicate the results presented in scientific publications.
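As a small illustrative sketch only (the directory names are hypothetical and this is not a mandated procedure), each data release in such a repository could be accompanied by a checksum manifest, so that readers can verify that downloaded data match the version used in a publication:

```python
import hashlib
import json
from pathlib import Path

def build_manifest(release_dir: Path) -> dict:
    """Compute a SHA-256 checksum for every file under a release
    directory; the resulting manifest can be published alongside
    the data so that readers can verify their downloads."""
    return {
        str(path.relative_to(release_dir)):
            hashlib.sha256(path.read_bytes()).hexdigest()
        for path in sorted(release_dir.rglob("*")) if path.is_file()
    }

# Example usage with a hypothetical release directory
release = Path("releases/v1.0")
if release.is_dir():
    manifest_file = release.parent / "v1.0-manifest.json"
    manifest_file.write_text(json.dumps(build_manifest(release), indent=2))
```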
Open Access refers to the practice of granting free Internet access to
research articles. This model is deemed to be an efficient system for broad
dissemination of and access to research data 24 and publications, which can
indeed accelerate scientific progress.
Although this model foresees that knowledge dissemination is on a free-of-cost basis, this does not mean that the publication process is entirely free of costs. The underlying philosophy, in fact, focuses on shifting the costs from the reader to the author/publisher, in order to readily access and disseminate publications.
**Open Access (OA)** can be defined 25 as the practice of providing on-line access to scientific information that is “free of charge” to the end-user and that is re-usable. The term “scientific” refers to all academic disciplines; in the context of research and innovation activities, “scientific information” can refer to: _(a)_ peer-reviewed scientific research articles (published in scholarly journals) 26 , or; _(b)_ research data (i.e., data underlying publications, curated data and/or raw data).
Establishing open access as a valuable practice ideally requires the active
commitment of each and every discrete/individual producer of scientific
knowledge. Open access contributions include original scientific research
results, raw data and metadata, source materials, digital representations of
pictorial and graphical materials and scholarly multimedia material.
Open access contributions have to satisfy two conditions 27 : (i) The author(s) and right holder(s) of such contributions grant(s) to all users a free, irrevocable, worldwide right of access to, and a license to copy, use, distribute, transmit and display the work publicly and to make and distribute derivative works, in any digital medium for any responsible purpose, subject to proper attribution of authorship (community standards will continue to provide the mechanism for enforcement of proper attribution and responsible use of the published work, as they do now), as well as the right to make small numbers of printed copies for their personal use, _and_ ; (ii) A complete version of the work and all supplemental materials, including a copy of the permission as stated above, in an appropriate standard electronic format, is deposited (and thus published) in at least one online repository using suitable technical standards (such as the Open Archive definitions) that is supported and maintained by an academic institution, scholarly society, government agency, or other well-established organisation that seeks to enable open access, unrestricted distribution, interoperability, and long-term archiving.
The philosophy underlying the open access model is to introduce barrier-free, cost-free access to scientific literature for readers 28 . In the past, restrictions to free access to scientific publications were accepted, because the subscription model was the only practically possible option when printed journals were the only means of disseminating validated scientific results 53 . While open access advocates free _dissemination_ of scientific knowledge, this does not necessarily imply that no costs are involved in the publishing process. Open access does not indulge in the illusion of an entirely cost-free publication process. Communication of scientific results has always been paid for out of research funds, one way or another, either directly or indirectly via institutional overhead charges. That does not change in an open access model. The OA model focuses on taking the burden of costs off the subscriber’s shoulders, often by shifting the costs from the reader to the author, so that payment for the process of peer review and publishing is made on behalf of the author, rather than the reader.
In conformance with the OA-based approach, the following options can be distinguished: _“open access to scientific publications”_, discussed in Section 4.1 below, and _“open access to research data”_, discussed in Section 4.2 below.
## 4.1 Open Access to Scientific Publications
**_Open access to scientific publications_** refers to “free-of-charge” online access for any potential user. Legally binding definitions of “open access” and “access” in this context do not practically exist, but authoritative definitions of open access can be found in key political declarations on this subject, for instance the _Budapest Declaration of 2002_ (http://www.budapestopenaccessinitiative.org/read) or the _Berlin Declaration 54 of 2003_ (http://openaccess.mpg.de/67605/berlin_declaration_engl.pdf). These definitions describe “access” in the context of open access as including not only basic elements such as “the right to read, download and print”, but also “the right to copy, distribute, search, link, crawl, and mine”.
There are two main routes towards open access to publications 55 :
* **Self-archiving / “green” open access** means that the published article or the final peer- _reviewed_ manuscript is archived (deposited) by the author -or an authorized representative in case of multiple authors- in an online repository before, alongside or after its publication. Some publishers request that open access be granted only after an “embargo” period has elapsed 56 .
The “green” open access is the practice of placing a version of an author’s
manuscript into a repository, making it freely accessible for everyone. The
version that can be deposited into a repository is dependent on the funder or
publisher. Unlike gold open access, the copyright for these articles usually
“sits” with the publisher of, or the society affiliated with, the title and
there are restrictions as to how the work can be reused. There are individual
self-archiving policies by journal or publisher that determine the terms and
conditions, e.g. which article version may be used and when the article can be
made openly accessible in the repository (also called an embargo period). A
list of publishers’ self-archiving policies can be found on the SHERPA/RoMEO
database 57 .
Scholars and researchers need the tools and the assistance to deposit their refereed journal articles in open electronic archives, a practice usually called _“self-archiving”_. When these archives conform to standards created by the Open Archives Initiative 58 , search engines and other tools can “treat the separate archives as one”. Users then need not know which archives exist or where they are located in order to find and make use of their contents.
* **Open access publishing / “gold” open access** means that an article is immediately provided in open access mode as published. In this specific model, the payment of publication costs is shifted away from readers paying via subscriptions 59 . The business model most often encountered is based on one-off payments by authors. These costs (often referred to as Article Processing Charges - APCs) can usually be borne by the university or research institute to which the researcher is affiliated, or by the funding agency supporting the research.
54
Following the spirit of the _Declaration of the Budapest Open Access Initiative_, the _Berlin Declaration on Open Access to Knowledge in the
Sciences and Humanities_ has been made in order to promote the Internet as a
functional instrument for a global scientific knowledge base and human
reflection and to specify measures which research policy makers, research
institutions, funding agencies, libraries, archives and museums need to
consider for such purpose. According to the proposed framework, new
possibilities of knowledge dissemination not only through the classical form
but also and increasingly through the open access paradigm via the Internet
had to be supported. “Open access” has been defined as a comprehensive source
of human knowledge and cultural heritage that has been approved by the
scientific community. In order to realize the vision of a global and
accessible representation of knowledge, the future Web needed to be
sustainable, interactive, and transparent. Content and software tools needed
to be openly accessible and compatible.
55
See, in a “joint” approach the discussion proposed in:
Harnad, S., Brody, T., Vallières, F., Carr, L., et _al._ (2008): The
Access/Impact Problem and the Green and Gold Roads to Open Access: An Update.
_Elsevier Serials Review, 34_ (1), pp.36-40.
Also see the framework presented within:
Harnad, S., and Brody, T. (2004, June): Comparing the Impact of open Access
(OA) vs. Non-OA Articles in the Same Journals. _D-Lib Magazine, 10_ (6).
Available at: http://www.dlib.org/dlib/june04/harnad/06harnad.html .
56
_**Green OA** _ foresees that the authors deposit (self-archive) the final
peer-reviewed manuscript in a repository (open archive) to be made available
in open access mode, usually after an embargo period allowing them to recoup
the publishing costs (e.g. via subscriptions or pay per download).
57
For further details, see: http://www.sherpa.ac.uk/romeo/index.php?la=en&fIDnum=|&mode=advanced .
58
For more relevant information see, for example: http://www.openarchives.org .
59
For this other model named _**Gold OA** _ , costs of publishing are covered
usually by the publisher so that research articles are immediately available
free of charge upon publication.
In other cases, the costs of open access publishing are covered by subsidies or other funding models.
Gold OA makes the final version of an article freely and permanently accessible for everyone, immediately after publication. Copyright for the article is retained by the authors and most of the permission barriers are removed. Gold OA articles can be published either in fully OA journals (where all the content is published OA) or in hybrid journals (subscription-based journals that offer an OA option which authors can choose if they wish). An overview of fully OA journals can be found in the Directory of Open Access Journals 29 (DOAJ). Scholars and researchers need the means to initiate a new generation of journals committed to open access and, consequently, to help existing journals that elect to make the transition to open access. Since journal articles should be disseminated as widely as possible, such new journals will no longer invoke copyright to restrict access to and use of the material they publish. Instead, they will use copyright and other tools to ensure permanent open access to all the articles they publish. Because price is a barrier to access, these new journals will not charge subscription or access fees, and will turn to other methods for covering their expenses. There are many alternative sources of funds for this purpose, including the foundations and governments that fund research, the universities and laboratories that employ researchers, endowments set up by discipline or institution, friends of the cause of open access, profits from the sale of add-ons to the basic texts, funds freed up by the demise or cancellation of journals charging traditional subscription or access fees, or even contributions from the researchers themselves. There is no need to favour one of these solutions over the others for all disciplines or nations, and no need to stop looking for other alternatives.
_**Hybrid model** _ – While several existing scientific publishers have
converted to the open access publishing model, such conversion may not be
viable for every publisher. A third _("hybrid")_ model of open access
publishing has therefore arisen. In the hybrid model, publishers offer authors
the choice of paying the article processing fee and having their article made
freely available online, or they can elect not to pay and then only journal
subscribers will have access to their article. The hybrid model offers
publishers of traditional subscription-based journals a way to experiment with
open access and to allow the pace of change to be dictated by the authors
themselves 30 .
Public institutions are also very interested in the OA system. The European
Commission is strongly committed to optimising the impact of publicly-funded
scientific research, both at European level ( _FP7, Horizon 2020_ ) and at
Member State level 31 , 32 .
Indeed, the European Commission acts as the coordinator between member states
and within the European Research Area (ERA) in order for results of publicly-
funded research to be disseminated more broadly and faster, to the benefit of
researchers, innovative industry and citizens. OA can also boost European
research, and in particular offers SMEs access to the latest research for
utilisation. The central underlying reasons for an OA system are that:
* The results of publicly-funded research should be publicly available;
* OA enables research findings to be shared with the wider public, helping to create a knowledge society across Europe composed of better-informed citizens;
* OA enhances knowledge transfer to sectors that can directly use that knowledge to produce better goods and services. Many constituencies outside the research community itself can make use of research results. These include small and medium-sized companies that do not have access to the research through company libraries, professional organizations (legal practices, family doctor practices, etc.), the education sector and so forth.
**_Misconceptions about open access to scientific publications:_ ** In the
context of research funding, open access requirements in no way imply an
explicit obligation to publish results. The decision on whether or not to proceed to publication lies entirely with the grantees. Open access becomes
an issue only _if_ publication is elected as a means of further realizing
dissemination. Moreover, OA does not interfere with the decision to exploit
research results commercially, e.g. through patenting. Indeed, the decision on
whether to publish open access must come after the more general decision on
whether to publish directly or to first seek protection. More information on
this issue is available in the European IPR Helpdesk 33 fact sheet
_“Publishing vs. patenting”_ 65 . This is also illustrated in _**Figure 3** _
, below, showing open access to scientific publication and research data in
the wider context of dissemination and exploitation 34 .
**Figure 3:** Open access to scientific publication and research data in the
wider context of dissemination and exploitation
## 4.2 Open Access to Research Data
**Open access to research data** refers to the right to access and re-use
digital research data under the terms and conditions set out in the Grant
Agreement.
Open Access and Open Access to research data are well aligned concepts related
to enabling access to publicly funded research.
Research data is whatever is produced in research or evidences its outputs.
According to the European Commission, the term “research data” refers to
information, in particular facts or numbers, collected to be examined and
considered and as a basis for reasoning, discussion, or calculation 35 .
However, while research data may generally be quantitative data, such as
numeric facts and statistics, it may also take the form of qualitative data
such as interview transcripts, or digital content including images and video,
and it tends to be discipline specific. The uniting factor is that research
data is not published research output. It is the raw material that leads to
research insights and as such it ultimately contributes to our combined stock
of knowledge. It is not only an incredibly important resource but essential
for academic progress.
In a research context, possible examples of data may comprise statistics,
results of experiments, measurements, observations resulting from fieldwork,
survey results, interview recordings and images. The focus is primarily upon
research data that is available in digital form.
Openly accessible research data can typically be accessed, mined, exploited,
reproduced and disseminated free of charge for the user.
For anyone working on a research project, managing the data produced is an
essential part of research practice that ensures research integrity. Good
research data management makes for good data, good researchers and good
research.
## 4.3 The European Approach and 5G ESSENCE within It
In the 2012 Communication “ _Towards better access to scientific information_
” 39 the European Commission announced that it would “ _provide a framework
and encourage Open Access to research data in Horizon 2020_ ”. In recent years
the EC has been driving change in this area, fuelled by important research reports such as the 2010 report _Riding the Wave_ 40 , which outlined a series of policy recommendations on how Europe could gain from the rising tide of scientific data. A subsequent follow-up report, entitled “ _The Data Harvest: How sharing research data can yield knowledge, jobs and growth”_ 41 , targeted policy makers in particular, and offered a warning on _how Europe
must act now to secure its standing in future data markets_ . In response to
growing recognition of the need to better manage and share research data the
_Guidelines on Open Access to Scientific Publications and Research Data in
Horizon 2020_ 42 acknowledge that “ _information already paid for by the
public purse should not be paid for again each time it is accessed or used,
and that it should benefit European companies and citizens to the full_ ”.
Open Access to research data is a high priority for the European Commission.
The term _Open Research Data_ (as a subset of the broader term _Open Data_ )
becomes relevant here. To clarify, when the Open Knowledge Foundation talks about Open Data, the term “open” has a specific meaning as defined in the _Open Definition_ 43 , which sets out principles that define “openness” in relation
to data and content.
“ _Open means anyone can freely access, use, modify, and share for any purpose
(subject, at most, to requirements that preserve provenance and openness)_ ”.
The _Horizon2020_ documentation supports this definition by stating that
openly accessible research data “ _can typically be accessed, mined,
exploited, reproduced and disseminated, free of charge for the user_ ”.
To ensure that projects the European Commission funds become party to opening up research data, _Horizon 2020_ has launched an Open Data Pilot.
Full details are provided in the aforementioned _Guidelines on Data Management
in Horizon 2020 76 _ . This document helps _Horizon 2020_ beneficiaries make
their research data _findable, accessible, interoperable and reusable (FAIR)_
44 , to ensure it is soundly managed. Good research data management is not a
goal in itself, but rather the key conduit leading to knowledge discovery and
innovation, and to subsequent data and knowledge integration and reuse.
The Commission is running a flexible pilot under Horizon 2020 called the _Open
Research Data pilot_ (ORD pilot). The ORD pilot aims to improve and maximise
access to and re-use of research data generated by _Horizon 2020_ projects and
takes into account the need to balance openness and protection of scientific
information, commercialisation and Intellectual Property Rights (IPR), privacy
concerns, security as well as data management and preservation questions.
Under the revised version of the 2017 Work Programme, the _Open Research Data
pilot_ has been extended to cover all the thematic areas of _Horizon 2020_ .
The ORD pilot applies primarily to the data needed to validate the results
presented in scientific publications. Other data can also be provided by the
beneficiaries on a voluntary basis, as stated in their Data Management Plans.
Costs associated with open access to research data can be claimed as eligible
costs of any Horizon 2020 grant 45 .
The 5G ESSENCE project intends to participate in the _H2020 Open Research Data Pilot_ , which complements well the views of the consortium on open access, open source, and on providing a transparent view of the scientific process, particularly relevant in science driven by public funds. To participate in this initiative, a Deliverable consisting of a first draft of the project’s Data Management Plan (DMP) will be produced in month M6 of the Project by WP8 (Deliverable D8.2), and further evolved as the Project goes on.
Data generated by the Project will mostly consist of measurements and traffic
data from various simulations, emulations in the CESC, Edge DC and CESCM
platforms, and the proof of concept (PoC) experimentation in the 5G ESSENCE
testbeds. Without going into full details of the DMP here, there are several
standards that can be used to store such data as well as providing the meta-
data necessary for third parties. The overall goal is to use, _as much as
possible_ , not only open formats to store the data but also open source
software to provide the scripts and other meta-data necessary to re-use it.
Similar to the software generated by the Project, some of the data generated
may pertain to components, software, or figures considered as confidential by
one or more of the partners. The particular data affected by this will be
described in the DMP and the reasons for maintaining confidentiality will be
provided.
5G ESSENCE will consider using OpenAIRE 46 (in cooperation with re3data.org)
to select the proper open access repository and/or deposit publications for
its research results storage, allowing also for easy linking with the EU-
_funded_ project.
This will increase the accessibility to the obtained results by a wider
community, which can be further enhanced by including the repository in
registries of scientific repositories such as DataCite 47 and Databib 48 .
These are the most popular registries for digital repositories and along with
_re3data.org_ 49 , they are collaborating to provide open data. All issues
related to data management (Data Types, Formats, Standards and Capture
Methods, Ethics and Intellectual Property, Access, Data Sharing and Reuse,
Resourcing, Deposit and Long-Term Preservation, Short-Term Storage) will be
defined and described in detail in Section 5.
# 5 Data Management Plan
## 5.1 European Community Strategic Framework for DMP
The European Commission recognised early on that research data is as important as publications 83 . It therefore announced in 2012 that it would
experiment with open access to research data 84 . Broader and more rapid
access to scientific papers and data will make it easier for researchers and
businesses to build on the findings of publicly-funded research 85 .
As a first step, the Commission has decided to make open access to scientific
publications a general principle of _Horizon 2020_ , the EU's Research &
Innovation funding programme for 2014-2020 86 . In particular, as of the
year 2014, all articles produced with funding from _Horizon 2020_ had to be
accessible according to the following options:
* Articles had either immediately to be made accessible online by the publisher (“Gold” open access) - up-front publication costs can be eligible for reimbursement by the European Commission; or
* researchers had to make their articles available through an open access repository no later than six months (12 months for articles in the fields of social sciences and humanities) after publication (“Green” open access).

The Commission has also recommended that Member States take a similar approach to the results of research funded under their own domestic programmes 87 . This will boost Europe's innovation capacity and give citizens quicker access to the benefits of scientific discoveries. Intelligent processing of data is also essential for addressing societal challenges.
The _Pilot on Open Research Data in Horizon 2020_ 88 does for scientific
information what the _Open Data Strategy_ 89 does for public sector
information: It aims to improve and maximise access to and re-use of research
data generated by projects for the benefit of society and the economy.
The _G8 definition of_ _Open Data 90 _ states that _data should be easily
discoverable, accessible, assessable, intelligible, useable, and wherever
possible interoperable to specific quality standards, while at the same time
respecting concerns in relation to privacy, safety, security and commercial
interests 91 _ .
The 5G ESSENCE Project intends to participate in the _H2020 Open Research Data
Pilot 92 _ , which complements well the Project’s views on Open Access, open
source 93 , and providing a transparent view of the scientific process,
particularly relevant in science driven by public funds.
83
European Commission (2012, July). _Communication on “Towards better access to
scientific information: Boosting the benefits of public investments in
research” [COM(2012) 401 final, 17.07.2012]._ Available at:
__http://ec.europa.eu/research/science-society/document_library/pdf_06/era-communication-towards-better-access-to-scientific-information_en.pdf_ _ .

84
European Commission (2012, July). Press Release IP/12/790 - _Scientific data:
open access to research results will boost Europe's innovation capacity.
Brussels, July 2012._ Available at : __http://europa.eu/rapid/press-
release_IP-12-790_en.htm_ _ .
85
Among the actions taken under the _“Digital Agenda for Europe” (COM(2010) 245
final/2),_ publicly funded research had to be widely disseminated through open
access publication of scientific data and papers.
86
European Commission (2012, July). Commission Recommendation of 17.07.2012 on
_“An accompanying Commission Recommendation sets out a complete policy
framework for improving access to, and preservation of, scientific
information” [C(2012) 4890 final, 17.07.2012]_ .
87
The goal was for 60% of European publicly- _funded_ research articles to be
available under open access by 2016.
88
European Commission (2013, December). Press release IP/13/1257 - _Commission
launches pilot to open up publicly funded research data. Brussels,
16.12.2013._ Available at : __http://europa.eu/rapid/press-
release_IP-13-1257_en.htm_ _ .
89
See, _for example_ : European Commission (2011, December): _Communication on
“Open data-An engine for innovation, growth and transparent governance”
[COM(2011) 882 final, 12.12.2011]._
90
G8 Science Ministers’ Statement, available at:
__https://www.gov.uk/government/news/g8-science-ministers-statement_ _ . UK
Foreign and Commonwealth Office, _June 13, 2013._
91
To ensure successful adoption by scientific communities, open scientific
research data principles will need to be underpinned by an appropriate policy
environment, including recognition of researchers fulfilling these principles,
and appropriate digital infrastructure.

92
Valuable information produced by researchers in many EU- _funded_ projects
will be shared freely as a result of a _Pilot on Open Research Data_ in
_Horizon 2020_ . Researchers in projects participating in the pilot are asked
to make the underlying data needed to validate the results presented in
scientific publications and other scientific information available for use by
other researchers, innovative industries and citizens. This will lead to
better and more efficient science and improved transparency for citizens and
society. It will also contribute to economic growth through open innovation.
More information about the related Commission’s initiative can be found at: __http://europa.eu/rapid/press-release_IP-13-1257_en.htm_ _ .
This Pilot is an opportunity to see how different disciplines share data in
practice and to understand remaining obstacles, as well as part of the
Commission’s commitment to openness in _Horizon 2020 94 _ .
Projects participating in the _Pilot on Open Research Data in Horizon 2020_
are required to deposit the research data described below 95 :
* The data, including associated metadata 96 , needed to validate the results presented in scientific publications as soon as possible;
* Other data 97 , including associated metadata, as specified and within the deadlines laid down in a _**data management plan (DMP) 98 ** _ .
Projects should deposit preferably in a research data repository and take
measures to enable third parties to access, mine, exploit, reproduce and
disseminate — free of charge for any user 99 .
_**Data Management Plans (DMPs)** _ are a _key element_ of good data
management. A DMP describes the data management life cycle for the data to be
collected, processed and/or generated by a _Horizon 2020_ project. As part of
making research data findable, accessible, interoperable and re-usable (FAIR)
100 , a DMP should include information on:
* The handling of research data during and after the end of the related project;
* what data will be collected, processed and/or generated;
* which methodology and standards will be applied;
* whether data will be shared/made open access, _and_ ;
* how data will be curated and preserved (including after the end of the related project).
The **main requirements of the _Open Data Pilot_ ** are listed as follows:
* Develop (and update) a Data Management Plan;
* Deposit in a research data repository;
* Make it possible for third parties to access, mine, exploit, reproduce and disseminate data – free of charge for any user;
* Provide information on the tools and instruments needed to validate the results (or provide the tools).
To participate in this initiative, the present _Deliverable D8.2_ consisting
of a first draft of the project’s Data Management Plan has been produced in
month 6 (M6) of the Project by WP8, and will be further evolved as the Project goes on.
93
Generally, open source refers to a computer program in which the source code
is available to the general public for use and/or modification from its
original design. Open-source code is meant to be a collaborative effort, where
programmers improve upon the source code and share the changes within the
community. Typically this is not the case, and code is merely released to the
public under some license. Others can then download, modify, and publish their
version (fork) back to the community. Today you find more projects with forked
versions than unified projects worked by large teams. For further reading see,
for example: Lakhani, K.R., von Hippel, E. (2003, June): How Open Source Software Works: Free User to User Assistance. _Research Policy, 32_ (6), pp. 923-943 [doi: 10.1016/S0048-7333(02)00095-1], as well as other informative references in __https://en.wikipedia.org/wiki/Open_source_ _ .
94
The _Pilot on Open Research Data_ in _Horizon 2020_ will give the Commission a
better understanding of what supporting infrastructure is needed and of the
impact of limiting factors such as security, privacy or data protection or
other reasons for projects opting out of sharing. It will also contribute
insights in how best to create incentives for researchers to manage and share
their research data. The Pilot will be monitored throughout _Horizon 2020_
with a view to developing future Commission policy and EU research funding
programs.
95
__https://www.openaire.eu/h2020-oa-data-pilot_ _ .
96
“Associated metadata” refers to the metadata describing the research data
deposited.
97
For instance, curated data not directly attributable to a publication, or raw
data.
98
A DMP may be also referred to as a “Data Sharing Plan”.
99
For example, the _**OpenAIRE project** _ provides a _**Zenodo repository** _ (
__http://www.zenodo.org_ _ ) that could be used for depositing data. Also see
OpenAIRE FAQ ( __http://www.zenodo.org/faq_ _ ) for general information on
Open Access and European Commission funded research.
100
Also see: Wilkinson, M.D., Dumontier, M., et _al._ (2016, March): The FAIR Guiding Principles for scientific data management and stewardship. _Sci. Data, 3,_ 160018. Available at: __http://www.nature.com/articles/sdata201618_ _ .
The DMP needs to be **updated** over the course of the project whenever significant changes arise, such as (but not limited to):

* new data;
* changes in consortium policies (e.g. new innovation potential, decision to file for a patent), _and_ ;
* changes in consortium composition and external factors (e.g. new consortium members joining or old members leaving).

The DMP should be updated as a minimum in time with the periodic evaluation/assessment of the Project 50 .
## 5.2 DMP in the Conceptual Framework of the _H2020_
All project proposals submitted to " _Research and Innovation actions_ " as
well as " _Innovation actions_ " had to include a section on research data
management which is evaluated under the criterion “Impact”. Where relevant,
applicants had to provide a short, general outline of their policy for data
management, including the following issues listed as _(i)-(iv):_
1. _What types of data will the project generate/collect?_
2. _What standards will be used?_
3. _How will this data be exploited and/or shared/made accessible for verification and re-use? (If data cannot be made available, this has to be explained why)._
4. _How will this data be curated and preserved?_
The described policy should reflect the current state of Consortium Agreements
regarding data management and be consistent with those referring to
exploitation and protection of results. The data management section can be
considered also as a checklist for the future and as a reference for the
resource and budget allocations related to data management.
Data Management Plans (DMPs) are introduced in the _Horizon 2020_ Work
Programs according to the following concept: “ _A further new element in
Horizon 2020 is the use of Data Management Plans (DMPs) detailing what data
the project will generate, whether and how it will be exploited or made
accessible for verification and reuse, and how it will be curated and
preserved. The use of a Data Management Plan is required for projects
participating in the Open Research Data Pilot. Other projects are invited to
submit a Data Management Plan if relevant for their planned research”._
Projects taking part in the _Pilot on Open Research Data_ are required to
provide a first version of the DMP as an early deliverable within the first
six months of the respective project. Projects participating in the above
Pilot as well as projects who submit a DMP on a voluntary basis because it is
relevant to their research should ensure that this deliverable is mentioned in
the proposal. Since DMPs are expected to mature during the corresponding
project, more developed versions of the plan can be included as additional
deliverables at later stages. The purpose of the DMP is to support the data
management life cycle for all data that will be collected, processed or
generated by the project. References to research data management are included
in Article 29.3 of the _Model Grant Agreement_ (an article that applies to all projects participating in the _Pilot on Open Research Data in Horizon 2020_ ).
A _**Data Management and Sharing Plan** _ 102 is usually submitted where a
project -or a proposal- involves the generation of datasets that have clear
scope for wider research use and hold significant long-term value 103 . In
short, plans are required in situations where the data outputs “form a
resource” from which researchers and other users would be able to generate
additional benefits. This would include all projects where the primary goal is
to create a database resource. It would also include other research generating
significant datasets that could be shared for added value - for example, those
where the data has clear utility for research questions beyond those that the
data generators are seeking to address. In particular, it would cover datasets
that might form "community resources" as defined by the _Fort Lauderdale Principles_ 104 and the _Toronto statement_ 105 . As noted in the _Toronto statement_ , community resources will typically have the following attributes: (i) large scale (requiring significant resources over time); (ii) broad utility; (iii) creating reference datasets, and; (iv) associated with community buy-in. For studies generating small-scale and
limited data outputs, a data management and sharing plan will not normally be
required. Generally, the expected approach for projects of this type would be
to make data available to other researchers on publication, and where possible
to deposit data in appropriate data repositories in a timely manner. While a
formal data management and sharing plan need not be submitted in such cases,
applicants may find the guidance below helpful in planning their approaches
for managing their data.
102
See, for example: “ _Guidance for researchers: Developing a data management
and sharing plan”._ Available at: __http://www.wellcome.ac.uk/About-
us/Policy/Spotlight-issues/Data-sharing/Guidance-for-researchers/index.htm_ _
.
103
Also see: Framework for creating a data management plan, ICPSR, University of Michigan, US. Available at:
__http://www.icpsr.umich.edu/icpsrweb/content/datamanagement/dmp/framework.htm_
_ .
104
For more related information, see: __http://www.wellcome.ac.uk/About-
us/Publications/Reports/Biomedicalscience/WTD003208.htm_ _ .
105
Toronto International Data Release Workshop Authors (2009): _Nature_ _461,_
168-170 (September 10, 2009) [ __doi:10.1038/461168a_ _ ]. Available at :
__http://www.nature.com/nature/journal/v461/n7261/full/461168a.html_ _ .
## 5.3 Principles and Guidelines for Developing a DMP
A DMP, as a document outlining how research data will be handled during a research project and after it is completed, is very important in all respects for projects participating in the _Horizon 2020 Open Research Data Pilot_ , as well as for almost any other research project. Especially where the project participates in the above-mentioned Pilot, it should always include clear descriptions and rationale for the access regimes that are foreseen for collected data sets 106 .
This principle is further clarified in the following paragraph of the Model
Grant Agreement: “ _As an exception, the beneficiaries do not have to ensure
open access to specific parts of their research data if the achievement of the
action's main objective, as described in Annex I, would be jeopardised by
making those specific parts of the research data openly accessible. In this
case, the data management plan must contain the reasons for not giving
access”._
A DMP describes the data management life cycle for all data sets that will be
collected, processed or generated by the corresponding research project. It is
a document outlining how research data will be handled during a research
project, and even after the project is completed, describing what data will be
collected, processed or generated and following what methodology and
standards, whether and how this data will be shared and/or made open, and how
it will be curated and preserved 107 . The DMP is not a “fixed” document;
actually, it evolves and gains more precision and substance during the
lifespan of the project 108 .
The first version of the DMP is expected to be delivered within the first 6
months of the respective project. This DMP deliverable should be in compliance
with the template provided by the Commission, as presented in the following
_Section 5.3.1_ . More elaborated versions of the DMP can be delivered at
later stages of the related project. The DMP would need to be updated at least
by the mid-term and final review to fine-tune it to the data generated and the
uses identified by the consortium since not all data or potential uses are
clear from the start. New versions of the DMP should be created whenever
important changes to the project occur, due to inclusion of new data sets,
changes in consortium policies or external factors. Suggestions for additional
information in these more elaborated versions are provided below in the
subsequent _Section 5.3.2_ **.**
DMPs should follow relevant national and international recommendations for
best practice and should be prepared in consultation with relevant
institutional and disciplinary stakeholders. They should anticipate
requirements throughout the research activity, and should be subject to
regular review and amendment as part of normal research project management.
### 5.3.1 Template for DMP
The purpose of the Data Management Plan (DMP) is to provide an analysis of the
main elements of the data management policy that will be used by the
applicants with regard to all the datasets that will be generated by the
related project 109 . The DMP is not a fixed document, but evolves during
the lifespan of the corresponding project.
The DMP should address 110 the points below on a dataset by dataset basis
and should reflect the current status of reflection within the consortium
about the data that will be produced 111 .
▪ **Data set reference and name**

Identifier for the data set to be produced.

106
UK Data Archive (2011, May): _Managing and Sharing Data. Best Practice for
Researchers_ . University of Essex, UK. Available at : __http://www.data-
archive.ac.uk/media/2894/managingsharing.pdf_ _ .
107
Brunt, J. (2011): _How to Write a Data Management Plan for a National Science
Foundatio_ n (NSF) Proposal. Available at:
__http://intranet2.lternet.edu/node/3248_ _ . 108
Support on research data management for projects funded under _Horizon 2020_
has been planned through projects funded under the _Research Infrastructures
Work Programme 2014-15_ .
109
An interesting conceptual approach is also proposed in: Donnelly, M. & Jones,
S. (2011): _DCC Checklist for a Data Management Plan_ v3.0. Digital Curation
Centre (DCC), UK. Available at : __http://www.dcc.ac.uk/webfm_send/431_ _ .
110
Also see: Jones, S. (2011). “How to Develop a Data Management and Sharing
Plan”. _DCC How-to Guides._ Edinburgh: Digital Curation Centre. Available
online : __http://www.dcc.ac.uk/resources/how-guides_ _ .
111
See the detailed approach proposed in:
__http://www.icpsr.umich.edu/icpsrweb/content/datamanagement/dmp/elements.html_
_ .
▪ **Data set description**
Description of the data that will be generated or collected, its origin (in
case it is collected), nature and scale and to whom it could be useful, and
whether it underpins a scientific publication. Information on the existence
(or not) of similar data and the possibilities for integration and reuse
should also be included. Plans should cover all research data expected to be
produced as a result of a project or activity, from “raw” to “published”. They
may include, _inter-alia_ , details of: (i) An analysis of the gaps identified
between the currently available and required data for the research; (ii)
anticipated data volume; (iii) anticipated data type and formats including the
format of the final data; (iv) measures to assure data quality; (v) standards
(including metadata standards) and methodologies that will be adopted for data
collection and management, and why these have been selected; (vi) relationship
to data available from other sources, _and_ ; (vii) anticipated
further/secondary use(s) for the completed dataset(s).
A **survey of existing data relevant to the project** and a discussion of
whether and how these data will be integrated, is also an important part of
the related action.
**Formats** in which the data will be generated, maintained, and made
available, including a justification for the procedural and archival
appropriateness of those formats, are also a very important part of the DMP.
▪ **Standards and metadata**
Reference to existing suitable standards of the discipline. If these do not
exist, an outline on how and what metadata will be created.
What disciplinary norms are to be adopted in the project? What is the data about? Who created it and why? In what form is it available? Metadata answers
such questions to enable data to be found and understood, ideally according to
the particular standards of the project-specific scientific discipline.
DMPs should specify the principles, standards and technical processes for data
management, retention and preservation that will be adopted. These may be
determined by the area of research and/or funder requirements. Processes
should be supported by appropriate standards addressing confidentiality and
information security, legal compliance, monitoring and quality assurance, data
recovery and data management reviews where suitable. In order to maximise the
potential for re-use of data, where possible, researchers should generate and
manage data using existing widely accepted formats and methodologies.
DMPs should provide suitable quality assurance concerning the extent to which
“raw” data may be modified. Where “raw” data are not to be retained, the
processes for obtaining “derived” data should be specified and conform to the
accepted procedures within the research field.
Researchers should ensure that appropriately structured metadata, using a
recognised or _de facto_ standard schema where these exist, describing their
research data are created and recorded in a timely manner. The metadata should
include information about regulatory and ethical requirements relating to
access and use. Protocols for the use, calibration and maintenance of
equipment, together with associated risk assessments, should be clearly
documented to ensure optimal performance and research data quality. Where
protocols change, they should be version controlled and the current version
should be available and readily accessible. Documentation may include:
Technical descriptions, code commenting; project-build guidelines; audit trail
supporting technical decisions; resource metadata. Not all types of
documentation will be relevant to all projects and the quantity of
documentation proposed should be proportionate to the anticipated value of the
data.
▪ **Data access and sharing**
Description of how data will be shared, including access procedures, embargo
periods (if any), outlines of technical mechanisms for dissemination and
necessary software and other tools for enabling re-use, and definition of
whether access will be widely open or restricted to specific groups.
Identification of the repository where data will be stored, if already
existing and identified, indicating in particular the type of repository
(institutional, standard repository for the discipline, etc.) 51 .
In case the dataset cannot be shared, the reasons for this should be mentioned
(e.g. ethical, rules of personal data, intellectual property, commercial,
privacy- _related_ , security- _related_ ).
By default as much of the resulting data as possible should be archived as
_Open Access_ . Therefore, legitimate reasons for not sharing resulting data
should be explained in the DMP.
Planning for data sharing should begin at the earliest stages of project
design and well in advance of beginning the research. Any potential issues
which could limit data sharing should be identified and mitigated from the
outset. Data management plans should therefore address how the research data
will be shared. Any reason for not eventually sharing data should be explained
with a justification citing for example legal, ethical, privacy or security
considerations.
* The **audience** refers to the potential secondary users of the data.
* The **selection and retention periods** refer to a description of how data will be selected for archiving, how long the data will be held, and plans for eventual transition or termination of the data collection in the future.
* **Security** calls for a description of technical and procedural protections for information, including confidential information, and how permissions, restrictions, and embargoes will be enforced.
* **Responsibility** identifies by name the individuals responsible for data management in the research project.
* **Intellectual Property Rights** refer to entities and/or persons who will hold the intellectual property rights to the data, and how IP will be protected if necessary. Any copyright constraints (e.g., copyrighted data collection instruments) should be noted.
* **Archiving and preservation (including storage and backup)**
Description of the procedures that will be put in place for long-term
preservation of the data. Indication of how long the data should be preserved, what its approximate end volume is, what the associated costs are and how these are planned to be covered.
Funding bodies are keen to ensure that publicly funded research outputs can
have a positive impact on future research, for policy development, and for
societal change. They recognise that impact can take quite a long time to be
realised and, _accordingly_ , expect the data to be available for a suitable
period beyond the life of the project. It has to be pointed out that it is not
simply enough to ensure that the bits are stored, but also to consider the
usability of the project-specific data. In this respect, consideration has to be given to preserving software or any code produced to perform specific analyses or to render the data, as well as to being clear about any proprietary or open source tools that will be needed to validate and use the preserved data.
Data management plans should provide for all retained data and related
materials to be securely preserved in such a way as to allow them to be
accessed, understood and used by any others having appropriate authorization
in future.
Data held electronically should be backed up regularly and duplicate copies
held in alternative locations in a secure and accessible format where
appropriate.
* **Ethics and privacy** refer to a discussion of how informed consent will be handled and how privacy will be protected, including any exceptional arrangements that might be needed to protect participant confidentiality, and other ethical issues that may arise.
* **Budget** correlates to the costs of preparing data and documentation for archiving and how these costs will be paid. Requests for funding may be included.
* **Data organization** refers to how the data will be managed during the project, with information about version control, naming conventions, etc.
* **Quality Assurance** refers to the corresponding and well-defined procedures for ensuring data quality during the related project.
* **Legal requirements** contains a listing of all relevant federal or funder requirements for data management and data sharing (see the illustrative sketch after this list).
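To make these template elements concrete, one informal option is to keep a per-dataset record in machine-readable form. The sketch below (Python) is a minimal illustration only; the keys and example values are assumptions for internal bookkeeping, not a schema mandated by the Commission or by any project.

```python
# Illustrative per-dataset DMP record mirroring the template headings above;
# keys and example values are assumptions, not a mandated schema.
dataset_entry = {
    "reference": "EXAMPLE_Dataset_2017-10-31_v1",   # data set reference and name
    "description": "Raw measurements underpinning publication X.",
    "standards_and_metadata": {
        "metadata_schema": "project sidecar file (JSON)",
        "formats": ["csv", "h5"],
    },
    "data_access_and_sharing": {
        "audience": "research community",
        "access": "open after publication",
        "repository": "disciplinary or institutional repository",
        "embargo_months": 0,
    },
    "archiving_and_preservation": {
        "retention_years": 5,                        # illustrative value
        "backup": "duplicate copies in alternative locations",
    },
    "responsibility": "named data manager of the producing partner",
}
```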
### 5.3.2 Additional Guidance for DMP
This guidance can be applied to any project that produces, collects or processes research data, and is included as a reference for elaborating DMPs in _Horizon 2020_ projects. It is structured as a series of questions that should ideally be clarified for all datasets produced in the project.
Scientific research data should be easily:
##### 1\. Discoverable
DMP question: Are the data and associated software produced and/or used in the
project discoverable (and readily located), identifiable by means of a
standard identification mechanism (e.g. Digital Object Identifier)?
##### 2\. Accessible
DMP question: Are the data and associated software produced and/or used in the
project accessible and in what modalities, scope, licenses 52 (e.g.
licensing framework for research and education, embargo periods, commercial
exploitation, etc.)?
##### 3\. Assessable and intelligible
DMP question: Are the data and associated software produced and/or used in the
project assessable for and intelligible to third parties in contexts such as
scientific scrutiny and peer review (e.g. are the minimal datasets handled
together with scientific papers for the purpose of peer review; are data provided in a way that judgments can be made about their reliability and the competence of those who created them)?
##### 4\. Useable beyond the original purpose for which it was collected
DMP question: Are the data and associated software produced and/or used in the
project useable by third parties even a long time after the collection of the data (e.g. is the data safely stored in certified repositories for long-term
preservation and curation; is it stored together with the minimum software,
metadata and documentation to make it useful; is the data useful for the wider
public needs and usable for the likely purposes of non-specialists)?
##### 5\. Interoperable to specific quality standards
DMP question: Are the data and associated software produced and/or used in the
project interoperable allowing data exchange between researchers,
institutions, organizations, countries, etc. (e.g. adhering to standards for
data annotation, data exchange, compliant with available software
applications, and allowing recombinations with different datasets from
different origins)?
## 5.4 Structuring of a 5G ESSENCE DMP
Different types of data raise very different considerations and challenges,
and there are significant differences between fields in terms of, for example,
the availability of repositories and level of established good practice for
data sharing.
Data generated by the Project will mostly consist of measurement and traffic
data from various simulations, emulations in the CESC platform, and the proof
of concept (PoC) experimentation in the 5G ESSENCE test-bed(s) and related
scenarios of use.
Without going into full details of the DMP here, there are several standards
that can be used to store such data as well as providing the meta-data
necessary for third parties to utilise the data.
The overall goal is to use, as much as possible, not only open formats to store the data but also open source software to provide the scripts and other metadata necessary to re-use it.
Similar to the software generated by the Project, some of the data generated
may pertain to components, software, or figures considered as confidential by
one or more of the partners.
The particular data affected by this will be described in the DMP and the
reasons for maintaining confidentiality will be provided.
According to the discussion provided in the previous _Section 5.3_ , a
suitable Data Management Plan (DMP) includes the following major components,
as shown in _**Figure 4** _ , below:
**Figure 4:** Structure of a Data Management Plan (DMP)
For the case of the 5G ESSENCE Project, the context becomes as it appears in
_**Figure 5** _ , below:
**Figure 5:** Essential Components of the 5G ESSENCE Data Management Plan
(DMP)
In the following _Sections 5.4.1-5.4.5_ we discuss, one-by-one, the essential
characteristics -or “modules”- of the 5G ESSENCE DMP, based on the concept of
_**Figure 5** _ .
### 5.4.1 Data Set Reference and Naming
The following structure is proposed for 5G ESSENCE data set identifier:
_5G ESSENCE [Name]_[Type]_[Place]_[Date]_[Owner]_[Target User]_
Where we identify the following fields:
* _“Name”_ is a short name for the data.
* _“Type”_ describes the type of data (e.g. code, publication, measured data).
* _“Place”_ describes the place where the data were produced.
* _“Date”_ is the date in format “YYYY-MM-DD”.
* _“Owner”_ is the owner or owners of the data (if any).
* _“Target user”_ is the target audience of the data (this is an optional identifier).
* _“_” (underscore)_ is used as the separator between the fields.
For example, _“5G
ESSENCE_Field_Experiment_data_Athens_2017-10-31_OTE_Internal.dat”_ is a data
file from a field experiment in Athens, Greece that has been performed on
2017-10-31 and owned by the project partner (project coordinator) OTE with
extension .dat (MATLAB 53 ). More information about the data is provided in
the metadata (see the following section).
All the data fields in the identifier above, apart from the target user, are
mandatory. If the owner (or owners) cannot be specified, then this should be indicated as: _“Unspecified-owner”._
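To illustrate the convention, the following minimal Python sketch (a hypothetical helper, not a project deliverable) assembles an identifier from its fields, substitutes the required fallback when no owner is given, and reproduces the example above:

```python
from datetime import date

def build_identifier(name, dtype, place, day, owner=None, target_user=None):
    """Assemble a 5G ESSENCE data set identifier from its fields.

    All fields except the target user are mandatory; "_" is the
    separator, and a missing owner becomes "Unspecified-owner".
    """
    fields = ["5G ESSENCE", name, dtype, place, day.isoformat(),
              owner or "Unspecified-owner"]
    if target_user:
        fields.append(target_user)
    return "_".join(fields)

# Reproduces the example from the text (the file extension is added separately):
ident = build_identifier("Field_Experiment", "data", "Athens",
                         date(2017, 10, 31), owner="OTE", target_user="Internal")
assert ident == "5G ESSENCE_Field_Experiment_data_Athens_2017-10-31_OTE_Internal"
```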
### 5.4.2 Data Set Description and Metadata
The previous _Section 5.4.1_ has defined a data set identifier. The data set
description is fundamentally an expanded description of the identifier with
more details.
The data set description, organised as metadata, is structured in a similar way to the identifier, but with more detail; depending on the file format, it will either be incorporated as part of the data file or provided as a separate file (in its simplest form, in text format). A separate metadata file will have the same name as the data file, with the added suffix _“METADATA”._
For example, the metadata file name for the data file from the previous
section will appear as follows: _“5G
ESSENCE_Field_Experiment_data_Athens_2017-10-31_OTE_Internal_METADATA.txt”_ .
The metadata file can also designate a number of files (e.g. a number of log files). The 5G ESSENCE Project may thus consider providing the metadata in XML 54 or JSON 55 formats, where convenient for parsing and further processing. The Project will develop several data types
related to the VNF (Virtual Network Function) Descriptors, NS (Network
Service) Descriptors, VNF Catalogues, etc., which will be specifically encoded
into the metadata format appropriately in order to have consistency in the
description and filtering of the data types.
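As an illustration of the separate-file option, the sketch below writes a JSON sidecar whose name is the data file's name plus the “METADATA” suffix; the field names are illustrative assumptions rather than a fixed project schema.

```python
import json

stem = "5G ESSENCE_Field_Experiment_data_Athens_2017-10-31_OTE_Internal"

# Illustrative sidecar metadata; the keys are assumptions for internal use.
metadata = {
    "identifier": stem,
    "name": "Field_Experiment",
    "type": "data",
    "place": "Athens",
    "date": "2017-10-31",
    "owner": "OTE",
    "target_user": "Internal",
    "files": [stem + ".dat"],   # a metadata file may designate several files
    "description": "Measurements from a field experiment in Athens, Greece.",
}

# The sidecar keeps the data file's name plus the "METADATA" suffix.
with open(stem + "_METADATA.json", "w") as fh:
    json.dump(metadata, fh, indent=2)
```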
The Project intends to share the datasets in an internally accessible
disciplinary repository, using descriptive metadata as required/provided by
that repository. Additional metadata to example test datasets can be offered
within separate XML-files.
Metadata can be made available in XML and JSON format. Keywords can be added
as notations in UML 56 and shall be modelled upon appropriate
specifications. The content can be similar to relevant data from compatible
devices and network protocols. Files and folders will be versioned and
structured by using a name convention consisting of project name, dataset
name, date, version and ID ( _see Section 5.4.1_ ).
All datasets are to be shared between the participants during the lifecycle of
the Project. Feedback from other participants and test implementations will
decide when the dataset should be made publicly available. When the datasets
support the framework defined by the 5G ESSENCE ontology, they will be made
public and presented in open access publications.
The 5G ESSENCE partners can use a variety of methods for exploitation and
dissemination of the data including:
* Using data in further research activities;
* Developing, creating or marketing a product or process;
* Creating and providing a service, or;
* Using data in standardisation activities.
### 5.4.3 Data Sharing
5G ESSENCE will use the _zenodo.org_ repository for storing the related
Project data and a 5G ESSENCE account will be created for that purpose.
_Zenodo.org_ is a repository supported by CERN and the EU OpenAire project 57
; this is open, free, searchable and structured with flexible licensing
allowing for storing all types of data: datasets, images, presentations,
publications and software.
Researchers working on European funded projects can participate by depositing their research output in a repository of their choice 58 , 59 , publishing in a participating Open Access journal, or depositing directly in the OpenAIRE repository _Zenodo_ , indicating the project it belongs to in the metadata 60 .
Dedicated pages per project are visible on the OpenAIRE portal. Project-
_based_ research output, whether it is publications, datasets or project
information is accessible through the OpenAIRE portal. Extra functionalities
are also offered too, such as statistics, reporting tools and widgets – making
OpenAIRE a useful support service for researchers, coordinators and project
managers. On this portal, each project has a dedicated page featuring: (i)
Project information; (ii) App. & Widget box 61 ; (iii) Publication list;
(iv) Datasets, _and_ ; (v) Author information.
In addition, we identify the following beneficial features:
* The repository has backup and archiving capabilities.
* The repository allows for integration with github.com, where the Project code will be stored. GitHub provides a free and flexible tool for code development and storage.
* _Zenodo_ assigns all publicly available uploads a Digital Object Identifier (DOI) to make the upload easily -and uniquely- citable.
All the above features make _Zenodo_ a good candidate as a _unified_
repository for all foreseen Project data (presentations, publications, code
and measurement data) coming from 5G ESSENCE.
Information on using _Zenodo_ by the Project partners with application to the
5G ESSENCE data will be circulated within the consortium and addressed within
the respective work package (WP8). The process of making the 5G ESSENCE data
public and publishable at the repository will follow the procedures described
in the 5G ESSENCE Consortium Agreement.
For the code, the Project partners will follow the internal _“Open Source
Management Process”_ document. All the public data of the Project will be
openly accessible at the repository. Non-public data will be archived at the
repository using the “closed access” option.
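For orientation, a scripted deposit through Zenodo's public REST API could look roughly like the sketch below; the access token, file name, grant identifier and metadata values are placeholders, and Zenodo's API documentation remains the authoritative reference for the workflow.

```python
import requests

API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "..."  # placeholder: a personal access token with the deposit scope

# 1) Create an empty deposition.
dep = requests.post(API, params={"access_token": TOKEN}, json={}).json()

# 2) Upload the data file into the deposition's file bucket.
fname = "5G ESSENCE_Field_Experiment_data_Athens_2017-10-31_OTE_Internal.dat"
with open(fname, "rb") as fp:
    requests.put(f"{dep['links']['bucket']}/{fname}",
                 data=fp, params={"access_token": TOKEN})

# 3) Attach descriptive metadata; the grant id enables OpenAIRE project linking
#    (the value below is an assumption and must be checked against the GA).
meta = {"metadata": {
    "title": fname,
    "upload_type": "dataset",
    "description": "Field experiment measurements (illustrative entry).",
    "creators": [{"name": "5G ESSENCE consortium"}],
    "grants": [{"id": "761592"}],
}}
requests.put(f"{API}/{dep['id']}", params={"access_token": TOKEN}, json=meta)

# 4) Publishing mints a DOI; non-public data would instead use "closed access".
requests.post(f"{API}/{dep['id']}/actions/publish", params={"access_token": TOKEN})
```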
### 5.4.4 Archiving and Preservation
The _Guidelines on Data Management in Horizon 2020_ require defining
procedures that will be put in place for long-term preservation of the data
and backup. The _Zenodo.org_ repository possesses these archiving capabilities
including backup and will be used to archive and preserve the 5G ESSENCE
Project data.
Further, the 5G ESSENCE Project data will also be stored in a project-managed repository tool called _SharePoint_ 62 , which is managed by the Project Coordinator. It has flexible live data storage capability. This
repository will directly link to the project website, where access information
to different data types can be provided. This will permit the users and
research collaborators to have easy and convenient access to the Project
research data.
### 5.4.5 Use of DMP within the Project
The 5G ESSENCE Project partners will use this plan as a reference for data
management (naming, providing metadata, storing and archiving) within the
project each time new project data are produced.
The 5G ESSENCE partners are introduced to the DMP and its use as part of WP8
activities. Relevant questions from partners will also be addressed within
WP8. The work package will also provide support to the project partners on
using _Zenodo_ as the data management tool. The DMP will be used as a live
document in order to update the project partners about the use, monitoring and
updates of the shared infrastructure.
# Executive Summary
This deliverable is D1.7 Data Management Plan of the blueSPACE project.
BlueSPACE addresses the challenges of increased capacity, scalability,
manageability, resource allocation of 5G networks by adopting a pragmatic yet
disruptive and future proof approach based on space division multiplexing in
the radio access network. BlueSPACE participates in the Open Research Data
Pilot and as a result specifies its approach to research data management in
this Data Management Plan (DMP).
The purpose of this document is to provide an initial description of the types
of data that will be generated and collected during the project and to lay out
the blueSPACE strategy for management and access regulation of this data.
Specifically, this initial version DMP describes the data management life
cycle for the datasets that have so far been identified and will be collected, processed and/or generated by the project. It covers:
* the handling of research data during and after the project;
* what data will be collected, processed or generated;
* what methodology and standards will be applied;
* whether data will be shared/made open and
* how data will be curated and preserved.
This document introduces the first version of the project Data Management
Plan. Following the EU’s guidelines regarding the DMP, this document may be
updated – if appropriate – during the project lifetime (in the form of
deliverables). The data management plan will cover the whole data life cycle.
_Figure 1.1: Steps in the data life cycle [1]_
# Introduction
Data management is an important component of the responsible conduct of
research. Thus, there is a need for a document to describe and define how data
will be collected, stored and preserved, managed and shared. This document,
which serves this purpose, is a Data Management Plan.
Horizon 2020 projects that are part of the Open Research Data Pilot are
required to evaluate the possibilities of making the research data accessible
via open access, following the motto “As open as possible, as closed as
necessary.” It is the aim to provide to end users free of charge online access
to project scientific information and outcomes from design and research
efforts of the consortium where deemed possible and feasible and as described
in this DMP.
Developing a DMP, the description of what data will be open access and how
these data will be shared and preserved will be introduced.
The aim of the current DMP is to describe blueSPACE’s strategy towards
research data management and to make the blueSPACE data easily:
* Discoverable
* Intelligible
* Accessible
* Useable
* Interoperable
First for the partners within the project and, second, for the wider research
community, related industries and the general public, where the latter is
deemed desirable and feasible, while maintaining the interests of the project
consortium with respect to, e.g., intellectual property (IP) protection and
commercial exploitation of project results.
The blueSPACE project participates in the Pilot on Open Research Data. The
consortium follows the principles of the concept of open science, therefore
all data produced by the project will be evaluated for the possibility to be
published under open access rules, in line with the principles described in
this document.
The blueSPACE data management plan is structured into the following sections:
section 2 discusses the structure of the DMP and the basic strategy for data
management in blueSPACE, section 3 focuses on which parts of the outputs of
blueSPACE can be made open access and the strategy to evaluate this,
section 4 discusses IPR and knowledge management related issues, while section
5 considers ethical aspects. Finally, section 6 concludes and summarizes the
blueSPACE DMP.
# Research Data Management within blueSPACE
This section describes the structure of the blueSPACE research data management
plan (DMP) and identifies the main considerations for research data management
within blueSPACE. After introducing the general structure, it discusses the
types of data that have been identified as possible outputs of blueSPACE, lays out
the basic naming conventions and metadata standards for generated data and
considers archiving, preservation and data sharing. Finally, it discusses the
intended use of the DMP within the project and across the project duration.
## Structure of the blueSPACE DMP
The structure of the DMP follows the parameters to be clarified regarding the
management of the project’s generated data. Following the template recommended
by the EC [2], the Data Management Plan (DMP) includes the following major
components, as described in Figure 2.1:
* Data set reference and name
* Data set description
* Standards and metadata
* Data sharing
* Archiving and preservation
These parameters and blueSPACE’s strategy for addressing them are discussed in
the following. It is the responsibility of each consortium partner to ensure
the data generated by the partner is treated according to the details laid out
in this DMP.
_Figure 2.1 blueSPACE Data Management Plan structure_
## Data types expected to be generated by blueSPACE
The information and data that are expected to be generated by the consortium of
the blueSPACE project include the following four types of data:
1. Human readable documents: project documentation, research documentation, publications etc.
[typical office document formats, e.g., predominantly in the ‘Portable
Document Format’ (PDF)]
2. Design documents for architecture and hardware components [design software specific formats]
3. Source code and software binaries: for implementation of functionalities, support of experimentation and simulation [plain text, binary software formats]
4. Experimental data: raw data from experiments and processed statistical data [binary raw data storage formats, e.g., ‘Hierarchical Data Format’ (h5)]
These four data types are considered distinct classes of data, with
different requirements towards their description and metadata, their
archiving, and their potential to be shared via open access.
The following sections will consider these different types and separately
discuss them with regards to the aforementioned requirements.
## Data Set Reference and Name
Several data types will be generated within the blueSPACE project with
different identified data sets and requirements. The data sets of all data
types, however, will follow a common overall naming scheme. The suggested naming
scheme is:
BlueSPACE_[Name]_[Type]_[Date]_[Partner]_[Version], where
* [Name] is a short and characteristic name for the data
* [Type] is the type of data (code, publication, measured data etc.)
* [Date] is the date when data was produced (format: YYYYMMDD)
* [Partner] is the name of the organization associated with the dataset; specifically, in blueSPACE this can take the following values:
* TUE for Eindhoven University of Technology
* AIT for Athens Information Technology
* UPV for Universidad Politecnica de Valencia
* CTTC for Centre Tecnològic de Telecomunicacions de Catalunya
* UC3M for Universidad Carlos III de Madrid
* ADVA for ADVA Optical Networking SE
* THALES for THALES SA
* OPT for OPTOSCRIBE Limited
* LIONIX for Lionix International
* OP for ORANGE Poland
* ICOM for Intracom SA Telecom Solutions
* EULAMBIA for Eulambia Advanced Technologies
* NXW for Nextworks
* OTE for Hellenic Telecommunications Organisation SA
* [Version] is the numbering of versions of the data
* _ (underscore) is used as the separator between the fields
For example, the following name of a potential data set:
_BlueSPACE_Configuration_code_20190831_AIT_v2_
would identify a data set of the type code, which is used to configure a
software element. The data set was generated on 31-08-2019 by AIT (which is
thus the owner of the data set) and the current data set is the second
version.
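For illustration only, the following minimal Python sketch applies the naming
scheme programmatically. The partner acronyms are taken from the list above;
the helper function itself is a hypothetical aid, not part of the project
tooling.

```python
from datetime import date

# Partner acronyms as listed in this DMP.
PARTNERS = {"TUE", "AIT", "UPV", "CTTC", "UC3M", "ADVA", "THALES", "OPT",
            "LIONIX", "OP", "ICOM", "EULAMBIA", "NXW", "OTE"}

def dataset_name(name: str, dtype: str, produced: date,
                 partner: str, version: int) -> str:
    """Build a data set name: BlueSPACE_[Name]_[Type]_[Date]_[Partner]_[Version]."""
    if partner not in PARTNERS:
        raise ValueError(f"unknown partner acronym: {partner}")
    if "_" in name or "_" in dtype:
        raise ValueError("fields must not contain the '_' separator")
    return f"BlueSPACE_{name}_{dtype}_{produced:%Y%m%d}_{partner}_v{version}"

# Reproduces the example given in the text:
print(dataset_name("Configuration", "code", date(2019, 8, 31), "AIT", 2))
# -> BlueSPACE_Configuration_code_20190831_AIT_v2
```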
## Data Set Description and Metadata Standards
A data set for the purpose of this DMP is considered to consist of one or
multiple files and associated metadata as well as a description of the data
set. A data set may combine files of multiple of the data types defined in
section 2.2. The description and metadata of the dataset serve the purpose of
identification, description and guidance for use and will at least contain the
following information, fulfilling the requirements of the DataCite [3] and
DublinCore [4] metadata schemas/standards:
* Identifier(s): an identifier as per section 2.3 plus, optionally, additional unique identifiers (e.g., a DOI [5] for published data sets, a deliverable number for project deliverables etc.)
* Creator/Author: name(s) and affiliation(s)
* Title: title of the data set
* Publisher: the entity publishing the data set
* Date: the year and month of publication, optionally the day and time of day
* Type: the dataset type, as defined in Table 2.1
* Format: the format(s) of the included data
* Description: a description of the data set, including description of its origin and intended use, as well as further description as per Table 2.1
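As a concrete illustration of such a record, the sketch below serializes the
minimum field set listed above. The field names loosely follow
DataCite/DublinCore terminology and all values are invented placeholders; the
actual schemas define considerably richer structures.

```python
import json

# Hypothetical metadata record covering the minimum fields listed above;
# the values are invented for illustration only.
record = {
    "identifier": "BlueSPACE_Configuration_code_20190831_AIT_v2",
    "additional_identifiers": ["10.5281/zenodo.0000000"],  # placeholder DOI
    "creator": [{"name": "Jane Doe", "affiliation": "AIT"}],
    "title": "Configuration code for the SDM fronthaul testbed",
    "publisher": "blueSPACE consortium",
    "date": "2019-08",
    "type": "Design data",          # one of the data set types in Table 2.1
    "format": ["text/x-python"],
    "description": "Scripts used to configure a software element; "
                   "intended for reuse within the consortium.",
}
print(json.dumps(record, indent=2))
```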
Five types of data set have been identified that are expected to be generated
within blueSPACE and are summarized with their typical included data types, a
short description, the required additional metadata as well as typical access
options in Table 2.1. Additional data set types or requirements may be added
over the course of the project as they are identified and defined.
_Table 2.1 blueSPACE data set definitions and description_
<table>
<tr>
<th>
**Data Set Type**
</th>
<th>
**Data**
**Types**
**(typical)**
</th>
<th>
**Description**
</th>
<th>
**Required information (in addition to above)**
</th>
<th>
**Release***
</th> </tr>
<tr>
<td>
Project Data
</td>
<td>
(i)
</td>
<td>
Financial and administrative data, partner information, periodic reports,
contracts etc. Typically confidential to the consortium plus the European
Commission; certain data sets may be private to single partners or subsets of
partners.
</td>
<td>
none
</td>
<td>
PR, CO
</td> </tr>
<tr>
<td>
Reports, dissemination and communication
</td>
<td>
(i)
</td>
<td>
Reports, deliverables, presentations, publications and communications, incl.
the project website and social media. Typically public or, where required,
confidential to the consortium.
</td>
<td>
\- short executive summary/abstract
</td>
<td>
PU, CO
</td> </tr>
<tr>
<td>
Design data
</td>
<td>
(i), (ii)
</td>
<td>
Software, hardware, system and network designs and related documentation.
Typically confidential to the consortium; may be public in special cases or
private to a single partner in exceptional cases.
</td>
<td>
\- intended use
\- design goal
</td>
<td>
CO, PU, PR
</td> </tr>
<tr>
<td>
Raw research data
</td>
<td>
(i), (iii), (iv)
</td>
<td>
Research data from simulations and experiments. May include relevant
software/code for simulation, instrumentation and/or analysis.
Typically confidential to the consortium,
open access where possible and desirable. May be private to a single partner
in some cases.
</td>
<td>
\- intended use
\- data format and layout
\- experimental/simulation setup
</td>
<td>
CO, PU, PR
</td> </tr>
<tr>
<td>
Analysed research data
</td>
<td>
(i), (ii), (iii),
(iv)
</td>
<td>
Processing and analysis results based on raw research data, including reports
and findings. Further includes models, statistical analysis and analytical
studies. Typically, confidential to the consortium,
open access where possible and desirable.
</td>
<td>
\- intended use
\- short executive summary/abstract
\- relation to relevant raw research data
</td>
<td>
CO, PU
</td> </tr> </table>
* PR: private to a consortium partner, CO: confidential within the consortium or a specified audience, PU: public
## Data Sharing and Preservation
The sharing and preservation of data sets generated in blueSPACE are highly
dependent on the data set type. It is a priori the responsibility of each
consortium partner to maintain and preserve the data generated on their
side. A collaboration and sharing platform (SharePoint) has been set up to
allow collaborative creation and sharing of data sets containing predominantly
human readable documents (type (i)). The data sets deposited and created on
this sharing platform are stored and preserved by the coordinator.
Additional requirements and strategies for sharing and preservation of some
data set types or more specific groups of data sets with specific
circumstances have been agreed and are summarized in Table 2.2.
_Table 2.2 Sharing and preservation of certain data set types_
<table>
<tr>
<th>
**Data Set Type**
</th>
<th>
**Description**
</th>
<th>
**Release***
</th>
<th>
**Access**
</th>
<th>
**Sharing**
</th>
<th>
**Preservation**
</th>
<th>
**Responsible**
</th> </tr>
<tr>
<td>
Project Data
</td>
<td>
All shared project data
</td>
<td>
CO
</td>
<td>
Closed
</td>
<td>
SharePoint
</td>
<td>
SharePoint
</td>
<td>
Coordinator
</td> </tr>
<tr>
<td>
Reports, dissemination and communication
</td>
<td>
Project deliverables
</td>
<td>
CO
</td>
<td>
Closed
</td>
<td>
SharePoint, EC participant portal
</td>
<td>
EC participant portal
</td>
<td>
Deliverable editors & coordinator (sharing), coordinator (preservation)
</td> </tr>
<tr>
<td>
Reports, dissemination and communication
</td>
<td>
Project deliverables
</td>
<td>
PU
</td>
<td>
Open
</td>
<td>
SharePoint, EC participant portal, project website
</td>
<td>
EC participant portal
</td>
<td>
Deliverable editors & coordinator (sharing), coordinator (preservation)
</td> </tr>
<tr>
<td>
Reports, dissemination and communication
</td>
<td>
Publications
</td>
<td>
PU
</td>
<td>
Open
(Closed)
</td>
<td>
Publisher, Institutional repository
</td>
<td>
Publisher, Institutional repository
</td>
<td>
Authors, coordinator
</td> </tr>
<tr>
<td>
Reports, dissemination and communication
</td>
<td>
Project reports
</td>
<td>
PU
</td>
<td>
Open
</td>
<td>
Project website, social media
</td>
<td>
\-
</td>
<td>
Authors, coordinator
</td> </tr>
<tr>
<td>
Reports, dissemination and communication
</td>
<td>
Project communications
</td>
<td>
PU
</td>
<td>
Open
</td>
<td>
Project website, print media, social media
</td>
<td>
\-
</td>
<td>
Coordinator
</td> </tr>
<tr>
<td>
Design data
</td>
<td>
design data in open access
</td>
<td>
PU
</td>
<td>
Open
</td>
<td>
Zenodo, online sharing platforms
</td>
<td>
Zenodo
</td>
<td>
Authors/Creators
</td> </tr>
<tr>
<td>
Raw research data
</td>
<td>
open access research data
</td>
<td>
PU
</td>
<td>
Open
</td>
<td>
Zenodo, online sharing platforms
</td>
<td>
Zenodo
</td>
<td>
Authors/Creators
</td> </tr>
<tr>
<td>
Analysed research data
</td>
<td>
open access research data
</td>
<td>
PU
</td>
<td>
Open
</td>
<td>
Zenodo, online sharing platforms
</td>
<td>
Zenodo
</td>
<td>
Authors/Creators
</td> </tr> </table>
* PR: private to a consortium partner, CO: confidential within the consortium or a specified audience, PU: public
## Use of DMP within the project
The blueSPACE project partners will use this DMP as a reference for data
management (naming, providing metadata, storing and archiving) whenever new
data is generated within the project. It is each project partner's
responsibility to ensure that data generated and handled within the project
is treated according to the provisions of this DMP. The project
partners are introduced to the DMP and its use as part of WP1 activities.
Relevant questions and concerns from partners will also be addressed within
WP1.
The DMP is not only a static reference, but is intended as a dynamic,
living and developing guideline, continuously adapted to reflect the data
types and data sets generated. It will thus be updated from its creation to
the end of the blueSPACE project whenever required.
# FAIR data
The blueSPACE consortium follows the principle of FAIR data, and so this section of
the Data Management Plan describes how blueSPACE ensures its data is findable,
accessible, interoperable and reusable. This applies, first, to the partners
within the project and, second, to the wider research community, related
industries and the general public, where the latter is deemed desirable and
feasible, while maintaining the interests of the project consortium with
respect to, e.g., intellectual property (IP) protection and commercial
exploitation of project results.
## Making data findable and openly accessible
In order to make blueSPACE data findable, standard dataset names and full
metadata and descriptions will be provided as it is described in section 2.
The use of a standardized metadata set allows full indexing and easy search
for available data sets. Additionally, search keywords and clear version
numbers will be provided to optimize possibilities for re-use.
blueSPACE data sets will be evaluated to determine whether open access is desirable
and can be provided. Where this evaluation is positive, data sets will be made available
online via research data sharing platforms, with Zenodo [6] identified as the
primary option. In addition to the standard metadata, Zenodo includes
assignment of a DOI [5], sophisticated versioning and licensing/access
control.
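To make the Zenodo route more concrete, the following sketch (assuming Python
with the `requests` library and a personal access token) walks through the
deposit flow offered by Zenodo's public REST API; all file names and metadata
values are placeholders, and the snippet is illustrative rather than project
tooling.

```python
import requests

# Illustrative Zenodo deposit; ACCESS_TOKEN and all names are placeholders.
# See https://developers.zenodo.org for the API documentation.
BASE = "https://zenodo.org/api"
params = {"access_token": "ACCESS_TOKEN"}

# 1) Create an empty deposition (Zenodo mints a DOI on publish).
dep = requests.post(f"{BASE}/deposit/depositions", params=params, json={})
dep.raise_for_status()
dep_id = dep.json()["id"]

# 2) Attach the data file to the deposition.
fname = "BlueSPACE_Configuration_code_20190831_AIT_v2.zip"  # placeholder
with open(fname, "rb") as fh:
    requests.post(f"{BASE}/deposit/depositions/{dep_id}/files",
                  params=params, data={"name": fname},
                  files={"file": fh}).raise_for_status()

# 3) Add the minimal metadata required by Zenodo.
metadata = {"metadata": {
    "title": "blueSPACE configuration code (example)",
    "upload_type": "dataset",
    "description": "Illustrative deposit following the blueSPACE DMP.",
    "creators": [{"name": "Doe, Jane", "affiliation": "AIT"}],
}}
requests.put(f"{BASE}/deposit/depositions/{dep_id}", params=params,
             json=metadata).raise_for_status()

# 4) Publishing makes the record public and mints the DOI:
# requests.post(f"{BASE}/deposit/depositions/{dep_id}/actions/publish",
#               params=params)
```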
## Data Sharing
Data sharing in blueSPACE takes multiple forms. First, human readable
data that is generated through the project and released to public access will
be shared through the project website. This website permits users and research
collaborators to have easy and convenient access to the project communications
and documentation and together with the project social media activities serves
as main instrument for outreach beyond the research community.
Second, all data sets of the reports, dissemination and communication, design
data, raw research data and analyzed research data types will be evaluated to
determine whether open access is desirable and can be provided. Where it can, access
will be provided via online research data sharing platforms as
discussed in the previous section.
The consortium aims for most of the project data to be made openly accessible.
However, sharing and the provision of open access are contingent on being
deemed desirable and feasible, while maintaining the interests of the project
consortium with respect to, e.g., intellectual property (IP) protection and
commercial exploitation of project results. If data sets are produced which
according to this evaluation cannot be shared, the reasons will be documented
and these data sets will be preserved by the generating partner in a
repository with limited access.
## Archiving and Preservation
The archiving and preservation of project data is mainly the responsibility of
the consortium partner generating the data, as described in section 2.5. For
data made publicly accessible, different provisions are made within the
consortium.
Data that is made publicly accessible via the project website and social media
is considered to be of relevance only in the short and medium term and will be
maintained by the respective platforms for a few years beyond the project
duration.
Data sets that are scientific publications are archived for open access by the
publishing consortium partner(s) in their respective institutional
repositories or similar; these also serve for preservation for the time beyond
the project duration. Additionally, the publisher, i.e., engineering or
academic society or publishing house, will deposit the publications in their
respective libraries where perpetual online availability is guaranteed.
Research data made publicly available will be deposited on online sharing and
depositing platforms, where storage duration and availability is governed by
the service description of the respective platform. In particular, Zenodo [6]
has been identified as the default sharing platform for any data sets made
publicly available and guarantees storage for at least 20 years, which for
the majority of the research data sets generated in blueSPACE is considered
more than sufficient.
Finally, the main project outcomes and research results contained in the
public project deliverables will be archived by the European Commission via
the participant portal and associated systems.
## Making data interoperable
It is blueSPACE’s aim to make its data interoperable to the maximum degree
possible without impacting the execution of work and research within the
consortium, and within reasonable effort. In order to make data
exchange possible, standard formats, vocabularies and document structures will
be applied. Additionally, metadata will be defined as per standards of
DataCite [3]. DataCite is a digital research data metadata standard, developed
with the purpose of accurate and consistent identification of datasets [3].
To further ensure interoperability and reusability of the research data,
blueSPACE has defined a number of additional, data set type dependent,
metadata and descriptive elements to be included for the respective data sets,
as shown in Table 2.1. These will aid interoperability and, combined with the
use of research best practices for data generation and research conception,
will ensure data sets generated within blueSPACE contain all required
information for exploitation by third party researchers.
# IPR and Knowledge Management
## IPR management
All project partners have Intellectual Property Rights (IPR) on their
technologies and data. Since the partners economically rely on their IPR, the
consortium is committed to allowing protection of the relevant data and to
consulting with the involved partners before publishing any data sets or
making them publicly accessible.
The Consortium Agreement (CA) includes all the information and rules that
project partners take into consideration in order to define the important
points necessary to obtain the best possible management (financial conditions,
IPR, planning) of intellectual property. IPR will be managed in line with a
principle of equality of all the partners towards the foreground knowledge and
in full compliance with the general European Commission policies regarding:
* Ownership,
* exploitation rights and
* confidentiality.
In general, outcomes, innovative ideas, concepts and solutions where
protection by patent application is not sought by the partners and which are
not regarded trade secrets will be made public after agreement between the
partners, to allow others to benefit from these results and exploit them. The
CA is used as a reference for all IPR cases.
The Consortium Agreement provides rules for handling confidentiality and IPR
to the benefit of the consortium and its partners. All the project
documentation will be stored electronically in the project’s collaborative
environment which is maintained by the coordinator. Classified documents will
be handled according to the respective rules with regard to classification,
numbering, locked storage and distribution limitations. The policy, that will
govern the IPR management in the scope of blueSPACE, is driven by the
principles described in the CA.
# Ethical aspects
The blueSPACE project consortium confirms that the actions mentioned
in the above chapters respect the ethical approach to the use cases and
pilots, conforming to the EU regulations on user informed consent and
privacy policies. The consortium will, where applicable, act
according to the:
* General Data Protection Regulation 2016/679 (Protection of personal data)
* Opinion 23/05/2000 of the European Group on Ethics in Science and New Technologies concerning ‘Citizens Rights and New Technologies: A European Challenge’ and specifically those relating to:
* ICT (Protection of privacy and protection against personal intrusion)
* Ethics of responsibility (Right to information security)
* Article 15 (Freedom of expression and research and data protection)
The project will ensure that the consortium agreement (or addendums thereof)
is constructed to enable such assurances to be formally made and adhered to by
consortium partners.
In addition, with respect to General Data Protection Regulation 2016/679
(Protection of personal data), individual work packages will be
specifically requested to ensure that any models, specifications, procedures
or products also enable the project end users to be compliant with this
regulation.
BlueSPACE partners also will abide by professional and research ethical best
practices and comply with the Charter of Fundamental Rights of the European
Union [7].
# Conclusion
This Data Management Plan provides an overview of the data that blueSPACE will
produce together with related challenges and constraints that need to be taken
into consideration. The methods described in this report allows the procedures
and infrastructures to be implemented by blueSPACE to efficiently manage the
data that will be produced. Most project partners will be owners or/and
producers of data, which implies specific responsibilities, as described in
this report. The blueSPACE Consortium will put strong emphasis on data
management with specific attention on IPR rules, regulations and good
practices.
_Table 6.1 blueSPACE data management summary_
<table>
<tr>
<th>
**Data Management Plan**
</th>
<th>
**blueSPACE DMP**
</th> </tr>
<tr>
<td>
Data Set Reference and Name
</td>
<td>
BlueSPACE_[Name]_[Type]_[Date]_[Partner]_[Version]
</td> </tr>
<tr>
<td>
Data Set Description
</td>
<td>
* The nature of the data set
* The scale of the data set
* To whom could the data set be useful
* Whether the data set underpins a scientific publication
* Information on the existence (or not) of similar data sets.
* Possibilities for integration with other data sets and reuse
* Relation to other datasets
* Conditions for (re-)use
</td> </tr>
<tr>
<td>
Standards and Metadata
</td>
<td>
Following recognized metadata standards, namely:
* DataCite metadata standard [3]
* Dublin core [4]
</td> </tr>
<tr>
<td>
Data Sharing
</td>
<td>
Data type specific (includes: blueSPACE, EC, and 5G PPP websites and
repositories, consortium partner institutional repositories, publisher
databases, research data sharing platforms)
</td> </tr>
<tr>
<td>
Archiving and Preservation
</td>
<td>
Same as for data sharing where suitable for archiving, especially: consortium
institutional, EC and 5G PPP repositories, publisher databases and Zenodo
</td> </tr> </table>
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
1291_Immersify_762079.md
|
# Introduction
Immersify is part of the Open Research Data pilot and therefore has an
obligation to prepare a Data Management Plan (DMP). A proper DMP is an important
part of the project strategy to achieve specific dissemination objectives and to
ease the access of the consortium and the general public to the data produced
during the execution of the Immersify project. All partners will identify what
kind of data will be generated and how they can be managed. As a general rule,
all datasets will be kept and managed by the partner coordinating the action
and gathering the data. Preservation will be done according to what kind of
content is managed (research publications, video footage, software, etc.) and
the country where the partner coordinating the action belongs.
The repository of documents (e.g. _Nextcloud_ ) will be used to store and
share among the partners all preliminary documentation and materials required
to produce the final versions of publicly available documents covering results
of the research tasks. Additionally, _Git_ , an open source distributed
version control system, will be used for software management and _Seafile_
, file sync and share platform, will serve all the video datasets (raw and
encoded).
As Immersify is an innovation project, only limited research activities will
be conducted, and therefore only a small set of research data will be
produced. In addition, some of the research results will be classified as
confidential and will not be published as open data. In particular, the
results of the optimization of the video codec developed in WP3 “Immersive
Display” and WP4 “Encoding and Streaming” are classified as confidential. The
main reason for this decision is that those results include confidential
information related to commercial products of the companies involved in the
project.
Results produced as part of the work on WP5 “Creative Development and
Demonstration” will be included in public deliverables, and, in particular, the
work performed in Task 5.3 “Quality Assessment and Content Preparation
Guidelines” will include guidelines documents that will be openly
published and distributed.
Immersify also ensures adequate protection of privacy and non-
disclosure or confidentiality agreements for sensitive information regarding
individuals, project partners and external stakeholders that participate in
the project. All private information will be anonymized before release to the
public. Research publications on, e.g. business case studies, will in
particular be required to respect this rule, anonymising individual and/or
company names if requested and/or setting embargo periods (so that
publications appear only when their content is not commercially sensitive).
Beyond that, and in respect of privacy and confidentiality legislation, all
other data generated by project activities will be freely available.
The Immersify Data Management Plan is a living document and will be subject to
change and evolve to be adapted to needs and requirements that may arise
during the project life cycle. As a minimum the document will be updated in
time with the RV1 - First project review (M14) and the final version will be
provided by month 30 (RV2 - Final review).
The following sections describe the data management practices and the
procedures for accessing the datasets, both for public and non-public
categories.
# Data Summary
The overall objective of Immersify is to advance the tools required for the
distribution and exhibition of the next generation of immersive media. In
order to satisfy this general goal Immersify will create new research datasets
in scope of the project as well as it will collect existing data from partners
and it will also buy third parties content. All the sets of data can be
divided into four main categories: ● Software tools
* Video content (raw and encoded)
* Deliverables, reports, guidelines and dissemination materials
* Articles and scientific publications
The creative industries targeted in Immersify are those connected to the Film,
TV, Video, Media Art, and related areas. As the project aims to enhance tools
for delivery and exhibition of VR and immersive content, it addresses several
industries across the value chain including: content producers and creators,
developers and users of post-production tools, providers of VoD services,
developers and users of tools for VR devices, and film/cinema and media art
exhibitors.
The table below contains a description of the datasets that partners have
identified during the initial 6 months of project.
<table>
<tr>
<th>
**Type of dataset Format Size Description**
</th> </tr>
<tr>
<td>
**Recorded raw video content**
</td>
<td>
Video (Raw, TIFF, PNG)
</td>
<td>
> 30 TB
</td>
<td>
Different types of high quality immersive and interactive video content (8K,
8K 3D, 8K 360, 16K) will be recorded or captured in different ways (e.g. laser
scanning and CG rendering) during the project lifetime. The content will be
used as test sequences and reference materials for VR and HEVC related
software tools developed in scope of the project.
</td> </tr>
<tr>
<td>
**Encoded video content**
</td>
<td>
Video (.mp4, H.265/ HEVC)
</td>
<td>
~ 1TB
</td>
<td>
Final versions of the raw content encoded with the H.265/HEVC codec with
various parameters and settings.
</td> </tr>
<tr>
<td>
**Making-of videos and pictures**
</td>
<td>
Pictures / video
</td>
<td>
~500GB
</td>
<td>
During the recording sessions, pictures and/or video will be taken in order to
document the whole process itself. This content will be mainly used for guide
preparation as well as, communication, dissemination and reporting purposes.
</td> </tr>
<tr>
<td>
**Articles, presentations**
</td>
<td>
Documents
(.pdf, .pptx)
</td>
<td>
~1GB
</td>
<td>
Different types of communication materials including research articles,
conference publications, press releases and presentations from the
conferences.
</td> </tr>
<tr>
<td>
**Deliverables, reports**
</td>
<td>
Documents
(.pdf)
</td>
<td>
~1GB
</td>
<td>
Documents reporting project progress, achievements, and final results.
</td> </tr>
<tr>
<td>
**Software tools**
</td>
<td>
Source code and binary files
</td>
<td>
~500MB
</td>
<td>
All software tools developed in scope of the project including
encoding/decoding tools, media player, streaming server, etc.
</td> </tr> </table>
**Table 1:** Definition of datasets
All repositories of the datasets used in the project are collected in Table 2.
<table>
<tr>
<th>
**Repository**
</th>
<th>
**Datasets**
</th>
<th>
**Public**
</th> </tr>
<tr>
<td>
**Nextcloud**
</td>
<td>
Deliverables, financial reports, sensitive project data
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Google drive**
</td>
<td>
Preliminary materials, working documents, promotional content, dissemination
materials, meetings summaries
</td>
<td>
No
</td> </tr>
<tr>
<td>
**Web page**
</td>
<td>
Project information, dissemination materials, preview of video content
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
**SeaFile**
</td>
<td>
Raw and encoded video content
</td>
<td>
No (on special request)
</td> </tr>
<tr>
<td>
**Git**
</td>
<td>
Software source code
</td>
<td>
No
</td> </tr> </table>
**Table 2:** Repositories of the datasets
# FAIR data
The Immersify consortium will attempt to maximize the visibility and
exploitation of the project and its long-term impact by making as many results
as possible publicly available, so that they can be easily discovered and re-used.
Specifically, the appropriately identified datasets (videos, publications,
software releases, etc.) will be generated and collected as a main project
outcome. With this purpose, the project will identify which datasets can be
made public, and which could be only available for project partners. This will
be treated on a case-by-case basis.
## Making data **Findable**
As not all datasets are fully specified regarding type and format, it
is too early to provide detailed information about all metadata
standards, naming conventions and clear versioning procedures that will be
applied in scope of the project. However, an important subset of the data has
been identified and will be described with basic metadata in order to make it
easily findable. The biggest part of the shared datasets is video content
produced by the project partners, which will be described in a well-defined
and structured way (web page template); additionally, a proprietary metadata
file linked with the video file will provide basic technical information about
it (e.g. file format, resolution, audio standard, rights, etc.). The final
audio-video content metadata format will be selected (EIDR membership is
considered) when the complete set of video content has been produced and made
available. The video content will be published in the dedicated section of the
project website as well as on popular video services like YouTube or Vimeo
with self-describing titles to make it findable with typical search engines
(e.g. Google).
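As an illustration of such a sidecar metadata file, the minimal Python sketch
below writes the kind of basic technical record described above; the field
names, values and JSON layout are hypothetical, since the project's final
metadata format has not yet been selected.

```python
import json

# Hypothetical sidecar metadata for a video file; the field names and the
# JSON layout are illustrative only, pending the final metadata format.
video_metadata = {
    "file": "immersify_demo_8k360.mp4",
    "file_format": "mp4",
    "video_codec": "H.265/HEVC",
    "resolution": "7680x3840",        # assumed 8K 360 equirectangular frame
    "audio_standard": "AAC stereo",
    "rights": "CC BY-SA (preview); raw material on special request",
}

# Write the record next to the video file as <name>.meta.json
with open("immersify_demo_8k360.meta.json", "w") as fh:
    json.dump(video_metadata, fh, indent=2)
```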
## Making data openly **Accessible**
In general, data generated by the project can be divided into two categories.
The first category includes **internal data** (e.g. financial reports,
confidential software modules) accessible only to project partners; no
sharing of this data is planned. The second group is composed of **datasets**
**which will be openly accessible** to the research community and anyone
interested in the project research and its results. Basic information about the
shared datasets will be listed on the public website of the project. There
will be a dedicated section for communication materials including research
articles, press releases and presentations, as well as a dedicated sub-page with
all publicly available information about the video content. The
content itself will be published on the well-known video services (e.g.
YouTube, Vimeo) in appropriate formats and codecs ready for distribution. This
part of the video datasets will be easily accessible without any specialized
software; only standard web browsers and media players will be required.
However, some part of the video content will be prepared for specific
visualisation installations such as caves, dome theaters or HMD devices and
will not be accessible without specialized software tools developed in scope
of the project (e.g. media player with HEVC and VR support) and dedicated
hardware. As the software tools will be a regular commercial product, it will
be delivered with full user documentation and SDK for researchers and
developers. Selected public data (e.g guidelines documents, content
descriptions) will be accessible via project web page and the content will be
stored in data centre provided by PSNC. In case of specific cooperation with
other project or researchers some raw materials or internal data can be also
shared on the bi-directional agreement basis.
The consortium intends to keep the Immersify brand and the webpage even after
the project end, so the created datasets which are publicly available will be
preserved. Project partners have agreed to register Immersify as an
international trademark.
## Making data **Interoperable**
Data formats for most data generated by the Immersify project will be typical
and universal (e.g. media files, office files), and therefore elaborate
metadata is not required for all datasets; in some cases a simple text
description or basic metadata will be sufficient. Multimedia datasets, the
most significant part of the shared content generated in scope of the project,
will be provided in typical and standardized audio/video formats such as
.avi or .mp4, encoded using HEVC/H.265, and for all the movie clips a basic
metadata file will be attached. In general, Immersify will make sure that
suitable standards will be chosen wherever possible to ease interoperability.
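To illustrate what such standardized encoding can look like in practice, the
following sketch shows one plausible HEVC/H.265 encode using ffmpeg invoked
from Python; the input file name and the quality parameters (CRF, preset) are
assumptions, as this DMP does not specify the project's actual encoding
settings.

```python
import subprocess

# Illustrative only: one plausible ffmpeg invocation producing an
# HEVC/H.265-encoded .mp4. The file names and quality parameters are
# assumptions, not the project's actual settings.
subprocess.run([
    "ffmpeg",
    "-i", "immersify_demo_8k360_master.mov",  # hypothetical input file
    "-c:v", "libx265",   # HEVC/H.265 software encoder
    "-crf", "22",        # constant-quality target (assumed value)
    "-preset", "slow",   # speed/efficiency trade-off (assumed value)
    "-c:a", "aac",       # standard audio codec for .mp4 delivery
    "immersify_demo_8k360.mp4",
], check=True)
```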
## Increase data **Re-use**
The public research data produced by the project partners will be made
available under the _Creative_ _Commons: Attribution-Share Alike_ (CC BY-SA)
licence, but some parts of the video content will be purchased from third-
party companies (e.g. BBC, ScanLab) and in this case the licence type will
be negotiated on a case-by-case basis. Further licensing details have to be
specified during the coming project months.
The datasets will be available as soon as they are fully reviewed and
prepared for publication; no embargo is envisaged and Immersify will not make
any restrictions regarding the duration of their re-use. In case of long-term
interest data, they will be deposited in community repositories that are
expected to have a long lifetime (e.g. Zenodo repository). Before publication
all datasets will be validated according to the internal quality assessment
procedures defined in scope of Task 5.3. Optimal configurations of the
software tools will be tuned to maximize the content quality, and subjective
tests will be prepared in order to validate the improvements in terms of data
quality. Some datasets as well as parts of them will be used in research
publications and these datasets can be potentially directly re-used to repeat
the experiment or conduct further research. In that case, detailed terms and
conditions for their re-use will be specified.
# Allocation of resources
The cost for making data FAIR will be, for the lifetime of the project,
covered by the Immersify consortium. All data management actions and
procedures are in scope of WP1 (Management and Coordination). Regarding
long-term preservation, the resources will be determined during the project
lifetime based on the actual cost.
# Data Security
PSNC (Poznan Supercomputing and Networking Center) provides secure and
reliable storage ( _Nextcloud,_ _Seafile, Git instance_ ) for different
types of datasets collected in the project. PSNC has a data centre with the
whole infrastructure necessary to provide redundancy of storage resources, power
supply, network connections and cooling systems. Additionally, the data centre
is equipped with active fire protection system, automatic gas extinguishing
system, access control and CCTV monitoring. Also, the construction of the
whole building in which the data centre is located complies with appropriate
standards.
# Ethical aspects
In the case of data where personal information is captured, the data will be
anonymized before being made publicly available. No other ethical issues have
been identified for the project at the moment.
# Other issues
The new EU regulation - “ _REGULATION_ _(EU) 2016/679 OF THE
EUROPEAN PARLIAMENT AND OF THE COUNCIL of 27 April 2016 on the protection of
natural persons with regard to the processing of personal data and on the free
movement of such data, and repealing Directive 95/46/EC (General Data
Protection Regulation)_ ” will probably have to be taken into consideration in the
next update of the DMP deliverable because it enters into force on 25 May 2018.
No other issues have been identified so far.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
1293_INVITE_763651.md
|
# Executive Summary
This document constitutes the initial version of the Data Management Plan
(DMP) and has been elaborated as a deliverable (D7.2) in the framework of the
INVITE project. INVITE aims at co-creating a better connected Open Innovation
(OI) ecosystem across Europe. It envisions an OI ecosystem in which knowledge
meaningfully flows across borders and is translated into marketable
innovations, bringing increased socioeconomic benefits to European citizens.
To this end, INVITE will co-design, pilot and demonstrate a suite of well-
tailored mechanisms, skill-sets and measures to be deployed by a pan-European
service platform, the Open Innovation Lab, with a view to better linking the
currently fragmented innovation systems of the EU by facilitating meaningful
cross-border knowledge flows; empowering EU businesses with the skill-sets
required to tap into Europe’s knowledgebase and turn it into value; and
increasing the participation of private investors in OI as well as
collaborative innovation projects.
The Open Innovation Lab will experiment with novel bottom-up collaborative
models of innovation and leverage OI support services and ICT tools to
stimulate and support OI across Europe, building a vibrant community of OI
actors and stakeholders (including academia, government, industry and civil
society) along the way. The valuable knowledge, evidence and experiences
gained through the experiments of the Open Innovation Lab will be diffused
across the EU so as to fuel their replication and scale-up for the benefit of
the European economy and society as a whole.
Under this light, it becomes evident that INVITE entails several activities
within its framework which involve the collection, production and/or
processing of data, with a view to generating meaningful insights that will
feed into the project and fuel the co-creation and delivery of truly demand-
driven and evidence-based results.
In this context, the initial version of INVITE’s DMP sets out the overall
methodological principles pertaining to the management of the data that
will be collected, processed and/or generated in the framework of INVITE,
safeguarding sound, FAIR and ethical data management along the entire duration
of the project. Moreover, it provides a first, yet still meaningful overview
of INVITE’s datasets, as identified in this early stage of the project, along
with information on the specific methodology pertaining to their management on
a dataset by dataset basis.
The initial version of the DMP is the first of the three versions of INVITE’s
Data Management Plan to be produced in the course of the project and will
serve as living document. Along these lines, the DMP will be updated and
further elaborated during the project in order to reflect an accurate, up-to-
date and ultimately comprehensive plan for managing the data that will be
collected, and/or generated by the project across their entire life cycle,
both during and after the completion of INVITE.
# Introduction
The current document, titled Data Management Plan – Initial Version (DMP), has
been elaborated within the framework of the **INVITE** project which has
received funding by the European Union’s Horizon 2020 Research and Innovation
programme under Grant Agreement No 763651.
INVITE is set on co-creating a well-connected European Open Innovation (OI)
ecosystem. It envisions an OI ecosystem in which knowledge meaningfully flows
across borders and is translated into marketable innovations, bringing
increased socio-economic benefits to EU citizens. To this end, INVITE will co-
design, pilot and demonstrate a pan-European service platform, the **Open
Innovation Lab** , aiming to better link the currently fragmented innovation
systems of the EU by facilitating meaningful cross-border knowledge flows;
empower EU businesses with the skill-sets required to tap into Europe’s
knowledge-base and turn it into value; and increase the participation of
private investors in OI and collaborative innovation projects.
The Open Innovation Lab will experiment with novel bottom-up collaborative
models of innovation and leverage **OI support services** and **ICT tools** to
stimulate and support OI across Europe, building a vibrant community of OI
actors and stakeholders (including academia, government, industry and civil
society) along the way. The valuable knowledge, evidence and experiences
gained through the experiments of the Open Innovation Lab will be diffused
across the EU so as to fuel their replication and scale-up for the benefit of
the European economy and society as a whole.
To this end, INVITE has brought together a well-balanced and complementary
**consortium** , that consists of **9 partners across 5 different European
countries** , as presented in the following table.
## _Table 1: INVITE partners_
<table>
<tr>
<th>
**No**
</th>
<th>
**Name**
</th>
<th>
**Short name**
</th>
<th>
**Country**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Q-PLAN INTERNATIONAL ADVISORS PC
</td>
<td>
Q-PLAN
</td>
<td>
Greece
</td> </tr>
<tr>
<td>
2
</td>
<td>
STEINBEIS INNOVATION GGMBH
</td>
<td>
SEZ
</td>
<td>
Germany
</td> </tr>
<tr>
<td>
3
</td>
<td>
EUROPE UNLIMITED SA
</td>
<td>
E-UNLIMITED
</td>
<td>
Belgium
</td> </tr>
<tr>
<td>
4
</td>
<td>
RTC NORTH LBG
</td>
<td>
RTC NORTH
</td>
<td>
United Kingdom
</td> </tr>
<tr>
<td>
5
</td>
<td>
NINESIGMA EUROPE BVBA
</td>
<td>
NINESIGMA
</td>
<td>
Belgium
</td> </tr>
<tr>
<td>
6
</td>
<td>
INTRASOFT INTERNATIONAL SA
</td>
<td>
INTRASOFT
</td>
<td>
Luxembourg
</td> </tr>
<tr>
<td>
7
</td>
<td>
CENTRE FOR RESEARCH AND TECHNOLOGY HELLAS
</td>
<td>
CERTH/ITI
</td>
<td>
Greece
</td> </tr>
<tr>
<td>
8
</td>
<td>
WIRTSCHAFTSFOERDERUNG REGION STUTTGART
</td>
<td>
WRS
</td>
<td>
Germany
</td> </tr>
<tr>
<td>
9
</td>
<td>
THE DURHAM, GATESHEAD, NEWCASTLE UPON TYNE, NORTH
TYNESIDE, NORTHUMBERLAND, SOUTH TYNESIDE AND
SUNDERLAND COMBINED AUTHORITY
</td>
<td>
NELEP
</td>
<td>
United Kingdom
</td> </tr> </table>
In this context, **all partners of INVITE’s consortium will adhere to sound
data management** in order to ensure that the meaningful data that will be
collected, processed and/or generated throughout the duration of the project
are well-managed, archived and preserved, taking into account the _Guidelines
on Data_ _Management in Horizon 2020_ .
Along these lines, the **objectives of this initial version of the DMP** are
to:
* Provide an overview of the principles underpinning the data management methodology that will be employed with a view to safeguarding the sound management of the data to be collected, processed and/or generated in the framework of INVITE, while also making them Findable, Accessible, Interoperable and Re-usable (FAIR).
* Identify the data that will be collected, processed and/or generated during the project as well as present meaningful information on how they will be handled, the methodology and standards applied to each one as well as how they will be curated and preserved during and after the project (if applicable) on a dataset by dataset basis.
With the above in mind, the initial version of **the DMP is** **structured in
4 distinct chapters** , as follows:
* **Chapter 1** provides introductory information about the initial version of the DMP, the context in which it has been elaborated as well as about its objectives and structure.
* **Chapter 2** describes the principles that are applied in the framework of INVITE in order to safeguard the effective management of data across their entire lifecycle, in line with the guidelines of the Commission.
* **Chapter 3** presents a description of the data that will be collected, processed and/or generated, addressing important relevant aspects such as methodology and metadata as well as data sharing, archiving and preservation on a dataset by dataset basis.
* **Chapter 4** concludes with the next steps of the project with respect to data management.
Finally, the **Annex** of this document includes the template for collecting
data management information per dataset in the framework of INVITE.
**The DMP is not a fixed document** . In fact, this is its **initial version**
and it will evolve during the lifespan of the project. In particular, the DMP
will be **further elaborated and updated twice throughout the duration of
INVITE** **(i.e. as D7.3 at M12 and as D7.4 at M36)** as well as ad hoc (if
necessary), in order to include new datasets, better detail and/or reflect
changes in the methodology applied or other aspects pertaining to the already
identified datasets (such as costs for making data FAIR, size of dataset,
metadata, etc.), changes in consortium policies and plans or other potential
external factors. Q-PLAN is responsible for the elaboration of the DMP and
with the support of all partners will update and enrich it when required.
# Data management principles
This chapter of the DMP presents the **overall** **methodological principles**
**pertaining to data management** in the framework of INVITE. Further details
with respect to the specific methodology applied for each of the different
datasets to be collected, processed and/or generated over the course of INVITE
(as identified in this early stage of the project) are provided in Chapter 3.
With that in mind, the DMP’s second chapter starts by providing a general
summary of the data to be collected, processed and/or generated by the project
as well as their types and versioning. It proceeds and concludes by presenting
the overall approach employed for making data FAIR, ensuring data security and
ultimately taking into account ethical aspects in this context.
## Data Summary
INVITE will produce several datasets during the lifetime of the project. The
data included within these **datasets may be quantitative, qualitative or a
blend** of those in nature and will be **analysed from a range of
methodological perspectives** with a view to producing meaningful insights
that will feed the activities of the project and fuel the delivery of
evidence-based results. These datasets will be **available in a variety of
easily accessible formats** , including post scripts (e.g. pdf, xps, etc.),
spreadsheets (e.g. xlsx, csv, etc.), text documents (e.g. docx, rtf, etc.),
compressed formats (e.g. rar, zip, etc.) or any other format required
depending on the objectives and methodology of the activity within the frame
of which they are produced.
Moreover, in order to facilitate the reference of the datasets that will be
collected and/or generated during INVITE, a **standard naming and versioning
structure** will be employed, as follows:
**INVITE _ [Name of Study] _ [Issue Date]**
* **INVITE:** The name of the project.
* **Name of Study:** A short version of the name of the study for which the dataset is created.
* **Issue Date:** The date on which the latest version of the dataset was modified (DD.MM.YYYY).
With the above in mind, some **indicative examples** to showcase the naming
structure applied in the context of INVITE are provided below:
* **INVITE_Needs&Requirements_31.10.2017** – A dataset generated within the framework of the survey conducted to identify the needs and requirements of diverse open innovation stakeholders. This is the version of the dataset that was last modified on the 31st of October 2017 (31/10/2017).
* **INVITE_BMValidation_01.02.2018** – A dataset created in the process of validating and improving the business models developed for the Open Innovation Lab with a view to feeding the elaboration of the business plan that will guide its market rollout beyond the end of the project. The last modification of this dataset was on the 1st of February 2018 (01/02/2018).
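For illustration only, the minimal Python sketch below parses and validates
dataset names that follow this convention; the helper is hypothetical and not
part of INVITE's tooling.

```python
import re
from datetime import datetime

# Hypothetical validator for the INVITE naming structure described above:
# INVITE_[Name of Study]_[Issue Date], with the date as DD.MM.YYYY.
NAME_RE = re.compile(r"^INVITE_(?P<study>[^_]+)_(?P<date>\d{2}\.\d{2}\.\d{4})$")

def parse_dataset_name(name: str) -> dict:
    """Return the study name and issue date encoded in a dataset name."""
    match = NAME_RE.match(name)
    if match is None:
        raise ValueError(f"not a valid INVITE dataset name: {name}")
    issued = datetime.strptime(match.group("date"), "%d.%m.%Y").date()
    return {"study": match.group("study"), "issued": issued}

# Reproduces the first indicative example from the text:
print(parse_dataset_name("INVITE_Needs&Requirements_31.10.2017"))
# -> {'study': 'Needs&Requirements', 'issued': datetime.date(2017, 10, 31)}
```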
## FAIR data principles
The _Guidelines on Data Management in Horizon 2020_ of the Commission
emphasise the importance of making the data produced by projects funded under
Horizon 2020 **Findable, Accessible, Interoperable as well as Reusable
(FAIR)** , with a view to ensuring their sound management. This means using
standards and metadata to make data discoverable, specifying the data sharing
procedures and which data will be open, allowing data exchange via open
repositories as well as facilitating the reusability of the data. With that in
mind, the following sections of the DMP lay out the principles followed in the
framework of INVITE with respect to the standards and metadata required to
make data findable, the sharing procedures foreseen to make the data
accessible and safeguard their interoperability as well as ensure their
preservation and open access, making them easily reusable by interested
stakeholders.
### Standards and Metadata
Any open datasets produced by INVITE will be accompanied by data that will
facilitate their understanding and re-use by interested stakeholders. These
data may include basic details that will assist interested stakeholders to
locate the dataset, including its format and file type as well as meaningful
information about who created or contributed to the dataset, its name and
reference, date of creation and under what conditions it may be accessed.
Complementary documentation may also encompass details on the methodology used
to collect, process and/or generate the dataset, definitions of variables,
vocabularies and units of measurement as well as any assumptions made.
Finally, wherever possible consortium partners will identify and use existing
standards.
### Sharing, Re-use and Interoperability
The Project Coordinator (Q-PLAN) in collaboration with the respective Work
Package Leaders (WPL) and any other involved project partners, will determine
whether and how the data collected and/or produced in the framework of
INVITE’s different activities will be shared and/or re-used either by other
project partners or by external interested stakeholders, within and outside
the framework of the project. This includes the definition of access
procedures as well as potential embargo periods along with any necessary
software and/or other tools which may be required for data sharing and re-use.
In case the dataset cannot be shared, the reasons for this will be clearly
mentioned (e.g. ethical, rules of personal data, intellectual property,
commercial, privacy-related, security-related). An informed consent form will
be requested from all external data providers in order to allow for their data
to be analysed and shared, while all such data will be anonymised before
sharing (for more details in this respect see Section 2.4 of the current
document).
### Preservation and Open Access
Any datasets that will be deemed open for sharing and re-use will be deposited
to an open data repository and will be made accessible to all interested
stakeholders, ensuring their long-term preservation and accessibility beyond
the lifetime of the project. At the moment, we consider the use of Zenodo (
_www.zenodo.org_ ) as one of the best online and open services to enable open
access to INVITE’s datasets, but similar repositories will also be considered
and an appropriate decision will be timely made at a future update of the DMP.
Q-PLAN will be responsible for uploading all open datasets to the repository
of choice, while all partners will be responsible for disseminating them
through their professional networks and other media channels.
## Data security
INVITE will handle any collected / generated data securely throughout their
entire lifecycle. In this context, the project partner responsible for
collecting / generating, processing and/or storing the data will ensure that
they are protected and any necessary data security controls have been
implemented, so as to minimize the risk of information leakage and destruction.
Overall, data will be stored within the private server of the project partner
responsible for the respective dataset and will be backed-up frequently to
ensure their security.
## Ethical aspects
INVITE entails activities which involve the collection of meaningful data from
selected individuals (e.g. interviews with users and stakeholders of the Open
Innovation Lab, etc.). The collection of data from participants in these
activities will be based upon a process of informed consent. Any personal
information will be handled according to the principles laid out by the
Directive 95/46/EC of the European Parliament and of the Council on the
“Protection of individuals with regard to the processing of personal data and
on the free movement of such data” (24 October 1995) and its revisions as well
as with relevant national regulations and laws. The participants’ right to
control their personal information will be respected at all times (including
issues of confidentiality). The Project Coordinator (Q-PLAN) will regulate and
deal with any ethical issue that may arise during the project in this respect,
in cooperation with the Steering Committee of the project.
The ethics aspects pertaining to the collection and processing of data in the
framework of INVITE will be addressed in further detail within the ethics
deliverables of the project namely “D8.1: H - Requirement No.
1” and “D8.2: POPD - Requirement No. 2”, both due for M6 of the project.
# Data management plan
## Overview
INVITE places special emphasis on the management of the valuable data that
will be collected and/or generated throughout its activities. In this respect,
the table below provides a list of the datasets identified by INVITE
consortium members at this stage of the project, indicating the name of the
dataset, its linked Work Package and the respective leading consortium member
(i.e. Work Package Leader).
### Table 2: List of INVITE datasets
<table>
<tr>
<th>
**No**
</th>
<th>
**Dataset Name**
</th>
<th>
**Linked**
**Work**
**Package**
</th>
<th>
**Work Package**
**Leader**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Market gaps and opportunities
</td>
<td>
WP1
</td>
<td>
CERTH/ITI
</td> </tr>
<tr>
<td>
2
</td>
<td>
User needs and requirements
</td>
<td>
WP1
</td>
<td>
CERTH/ITI
</td> </tr>
<tr>
<td>
3
</td>
<td>
Co-creation workshop outcomes
</td>
<td>
WP1
</td>
<td>
CERTH/ITI
</td> </tr>
<tr>
<td>
4
</td>
<td>
Pilot monitoring, co-evaluation and validation
</td>
<td>
WP3
</td>
<td>
RTC NORTH
</td> </tr>
<tr>
<td>
5
</td>
<td>
Open Innovation Lab user data
</td>
<td>
WP3
</td>
<td>
RTC NORTH
</td> </tr>
<tr>
<td>
6
</td>
<td>
Business model validation and improvement
</td>
<td>
WP4
</td>
<td>
Q-PLAN
</td> </tr>
<tr>
<td>
7
</td>
<td>
Dissemination and communication results
</td>
<td>
WP6
</td>
<td>
E-UNLIMITED
</td> </tr> </table>
With the above in mind, **the current chapter provides meaningful information
per each dataset** , including:
* The name of the dataset and the type of study in the frame of which it is produced.
* A concise description of the dataset as well as its format and volume.
* The methodology and tools employed for collecting/generating the data.
* Any standards that will be used (if applicable) as well as metadata to be created.
* Potential external stakeholders for whom the data may prove useful.
* Provisions regarding the confidentiality of the data.
**The information provided within this section reflects the current views and
plans of INVITE at this early stage of the project (M3) and will be further
elaborated** **in future versions of the DMP** (e.g. through the inclusion of
more elaborate descriptions of the datasets, standards and metadata, how the
datasets may be preserved, accessed and re-used in the long-term, etc.). The
template employed for collecting the information from project partners is
annexed to this document.
## Market gaps and opportunities (WP1)
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
Analysis of Market Gaps and Opportunities (INVITE_Gaps&Opportunities).
</th> </tr>
<tr>
<td>
**Type of study**
</td>
<td>
This study involves an evaluation of three existing Open Innovation (OI)
platforms/service providers: Enterprise Europe Network (EEN), NineSigma and
Steinbeis. The data that supports this evaluation will be collected in two
phases. Phase 1 involves a desk review of secondary data sources that are in
the public domain, whereas Phase 2 involves a series of semi-structured
interviews with 6-8 respondents who work for either an EEN consortium member,
NineSigma or Steinbeis for the Open Innovation Lab.
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
The data collected will consist of a combination of information extracted from
the secondary data sources and information provided by the respondents during
the in-depth interviews. In both cases, the data collected will be mainly of a
qualitative nature and will be recorded in plain text in English. Both the
secondary and primary data sources will be used as the basis of a report which
will summarise the strengths and weaknesses of the three existing OI
platforms/service providers and identify potential market gaps and
opportunities.
</td> </tr>
<tr>
<td>
**Methodologies for data collection / generation**
</td>
<td>
A semi-structured interview guide will be used to collect the data during the
in-depth interviews. The interviews will be conducted face-to-face, by
telephone or via Skype/WebEx (or a similar online videoconferencing
application).
</td> </tr>
<tr>
<td>
**Storage and volume of the dataset**
</td>
<td>
In the case of the secondary data sources, copies of the data will not be
stored. However, references and/or web links to the data sources consulted
will be included as an appendix to the final report. In the case of the
primary data collected during the in-depth interviews, the dataset will be
stored
in standard Word format (.docx) on RTC NORTH’s in-house server and will be
preserved for 5 years following the end of the project, before eventually
being deleted. For security reasons, the dataset will be backed up at the end
of each night on removable tape and will be stored in a fireproof safe.
</td> </tr>
<tr>
<td>
**Metadata and standards**
</td>
<td>
The dataset will be accompanied with basic descriptive metadata (i.e. title,
author, date created and keywords); an illustrative metadata record is
sketched after this table.
</td> </tr>
<tr>
<td>
**For whom might the dataset be useful?**
</td>
<td>
The dataset will be used by INVITE partners to help design and pilot the Open
Innovation Lab, based on the needs and requirements of a range of open
innovation stakeholders.
</td> </tr>
<tr>
<td>
**Confidentiality**
</td>
<td>
The primary data collected via the in-depth interviews will not be shared
and/or
reused (outside the framework of the project and/or beyond its completion) to
ensure the confidentiality of the interviewees and their responses.
</td> </tr> </table>
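For illustration, the basic descriptive metadata listed above (title, author,
date created and keywords) could be captured in a simple machine-readable
record. The sketch below assumes a plain Python data structure; the field
names and example values are illustrative assumptions only, not a prescribed
INVITE schema.

```python
# Minimal sketch of a basic descriptive metadata record (assumed structure).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetMetadata:
    title: str
    author: str
    date_created: date
    keywords: list[str] = field(default_factory=list)

# Example values are placeholders, not actual project metadata.
record = DatasetMetadata(
    title="Analysis of Market Gaps and Opportunities (INVITE_Gaps&Opportunities)",
    author="RTC NORTH",
    date_created=date(2018, 1, 1),
    keywords=["open innovation", "market gaps", "opportunities"],
)
print(record.title)
```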
## User needs and requirements (WP1)
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
Analysis of user needs and requirements (INVITE_Needs&Requirements)
</th> </tr>
<tr>
<td>
**Type of study**
</td>
<td>
This qualitative study, to be conducted in the framework of INVITE, will take
the form of an interview-based survey of user needs and requirements. In
particular, stratified, purposeful sampling will be employed in order to
include diverse stakeholders of a quadruple helix innovation system (e.g.
private sector, academic organisations & research institutes, governmental
and public services, and civil society), with the aim of revealing their views
regarding Open Innovation practices across Europe and informing the
demand-driven development of INVITE’s Open Innovation Lab. The sample will be
stratified in the sense that participants will vary according to stakeholder
sector and level of engagement, and purposeful in that participants will be
recruited through key organizations/businesses which the project partners
identify as impactful to the project’s expected outcomes.
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
The data collected will contain the responses (mainly qualitative – plain text
in English) provided by interviewees who will participate in the interview-
based survey, addressing their different views of the current state-of-play in
Open Innovation across Europe. No secondary data or other third-party sources
will be used.
</td> </tr>
<tr>
<td>
**Methodologies for data collection / generation**
</td>
<td>
Data will be collected via semi-structured questionnaires administered to
study participants in the frame of interviews. Participants are to be
recruited from all actors of the quadruple helix and invited to participate in
the interviews via phone, e-mail, or a face-to-face meeting.
</td> </tr>
<tr>
<td>
**Format and volume of the dataset**
</td>
<td>
Private versions of the dataset will be hosted on the data management portal
created and maintained by CERTH/ITI, while links to the portal will exist on
the INVITE website with access restricted to the INVITE partners. The dataset,
comprised of interview responses, will be stored in standard spreadsheet
format (.xlsx). At least 50 interviews will be conducted across Europe and the
same number of records will be collected and stored in the dataset.
Furthermore, in order to avoid data losses, RAID and other common backup
mechanisms will be utilized, ensuring data reliability and improved
performance. The archiving system of CERTH/ITI will contain the initial data
as sent to the INVITE repository. The dataset will remain at the data
management portal for the whole project duration, as well as at least 2 years
after the end of the project. The volume of data is estimated to be
approximately 10 MB for all interview responses collected and the analysis
report submitted. Finally, after the end of the project, the portal will be
hosted alongside other portals on the same server for 5 years, before
eventually being deleted, so as to minimize the cost of maintenance. For
security reasons, the dataset will be backed up in an external hard-drive
every month.
</td> </tr>
<tr>
<td>
**Metadata and standards**
</td>
<td>
Basic descriptive metadata (such as title, author, date created and keywords)
will accompany the dataset.
</td> </tr>
<tr>
<td>
**For whom might the dataset be useful?**
</td>
<td>
The dataset will be used by CERTH/ITI in order to analyse and extract the
topics of interest that will inform INVITE partners in the process of
designing the INVITE Co-creation Workshop and ultimately the first version of
the Open Innovation Lab according to the needs and requirements of open
innovation stakeholders.
</td> </tr>
<tr>
<td>
**Confidentiality**
</td>
<td>
The dataset will remain confidential. Participant names will be protected and
replaced by an abbreviated version of their sector, a sequential two-digit
number and the initials of the project partner who conducted the interview
(e.g. ARxxQPLN for Academia/Research sector, PSxxQPLN for Public Sector,
BSxxQPLN for the Private Sector, CSxxQPLN for Civil Society Sector, where xx
is a sequential two-digit number corresponding to the interviewee). The raw
data collected through the interview-based survey will not be shared and/or
re-used (outside the framework of the project and/or beyond its completion) to
ensure the confidentiality of the interviewees and their responses. An
illustrative sketch of the coding scheme above is given after this table.
</td> </tr> </table>
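To make the coding scheme above concrete, the following sketch shows how such
pseudonyms could be generated programmatically. The sector abbreviations and
the example partner initials come from the scheme described in the table; the
function itself is an illustrative assumption, not project code.

```python
# Sketch of the participant coding scheme: sector abbreviation + sequential
# two-digit number + initials of the partner who conducted the interview.
from itertools import count

SECTOR_CODES = {
    "Academia/Research": "AR",
    "Public Sector": "PS",
    "Private Sector": "BS",
    "Civil Society": "CS",
}

# One independent counter per sector, starting at 01.
_counters = {abbrev: count(1) for abbrev in SECTOR_CODES.values()}

def pseudonym(sector: str, partner_initials: str) -> str:
    """Return the next pseudonym for an interviewee of the given sector."""
    abbrev = SECTOR_CODES[sector]
    return f"{abbrev}{next(_counters[abbrev]):02d}{partner_initials}"

print(pseudonym("Academia/Research", "QPLN"))  # AR01QPLN
print(pseudonym("Private Sector", "QPLN"))     # BS01QPLN
```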
## Co-creation workshop outcomes (WP1)
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
Co-creation workshop outcomes (INVITE_Co-creation).
</th> </tr>
<tr>
<td>
**Type of study**
</td>
<td>
A co-creation workshop whereby all invited participants will explore and
provide feedback on the various aspects and features of the Open Innovation
Lab to be co-created.
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
A record of feedback and suggestions expressed by participants of the
workshop.
</td> </tr>
<tr>
<td>
**Methodologies for data collection / generation**
</td>
<td>
All participants will be introduced to a series of ideas and concepts, and
together with INVITE partners they will co-create and agree on user-driven
characteristics for the Open Innovation Lab and the interconnected pilot
programme of the project. Different workshop sessions will be dedicated to
discussing and co-creating the different components of the Open Innovation Lab
(i.e. service, capacity building and ICT components) while at the same time
collecting interesting ideas and feedback with respect to the design of INVITE
pilots. Participants will be engaged in creative brainstorming and ideation
sessions that will also include gamified exercises, which will serve as ice
breakers allowing them to get to know each other better, break down their
inhibitions and thus unlock their imagination. Simple illustrators will be
employed to visually capture ideas and help them grow into concrete concepts.
These will be recorded and made available to INVITE partners for the design of
the Open Innovation Lab.
</td> </tr>
<tr>
<td>
**Format and volume of the dataset**
</td>
<td>
In the case of the primary data collected, including feedback and ideas
resulting from the co-creation process, the dataset will be stored in standard
Word format (.docx) on RTC NORTH’s in-house server and will be preserved for 5
years following the end of the project, before eventually being deleted. For
security reasons, the dataset will be backed up at the end of each night on
removable tape and will be stored in a fireproof safe.
</td> </tr>
<tr>
<td>
**Metadata and standards**
</td>
<td>
The dataset will be accompanied with basic descriptive metadata (i.e. title,
author, date created and keywords).
</td> </tr>
<tr>
<td>
**For whom might the dataset be useful?**
</td>
<td>
The dataset will be used by INVITE partners to better design and shape the
Open Innovation Lab for its users, based on the feedback and ideas of the
co-creation workshop participants.
</td> </tr>
<tr>
<td>
**Confidentiality**
</td>
<td>
The primary data collected via the co-creation workshop will not be shared
and/or re-used (outside the framework of the project and/or beyond its
completion) to ensure the confidentiality of the participants and their
contributions.
</td> </tr> </table>
## Pilot monitoring, co-evaluation and validation (WP3)
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
Pilot monitoring, co-evaluation and validation (INVITE_PilotMCV)
</th> </tr>
<tr>
<td>
**Type of study**
</td>
<td>
The study will involve deploying the pilots of INVITE through the Open
Innovation Lab in two rounds, in order to address the needs of users and
stakeholders of the open innovation ecosystem as well as to collect data and
feedback from them, so as to improve and fine tune the design of the pilots
and the Open Innovation Lab.
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
Data and feedback provided by users (mainly SMEs) and other stakeholders
participating in the pilots of INVITE based on a custom-made multi-layer
framework for monitoring the operation, performance and results of the pilots.
Indicative core themes to be monitored and measured include the integration of
open innovation in the users’ business model, external knowledge and
technology search and acquisition, collaboration with other stakeholders,
occasional vs continuous engagement in open innovation activities, disruptive
vs incremental innovation, internal innovation capability, time-to-market,
level of proficiency gained in collaborative innovation, scale achieved in
terms of outreach (volume, sectoral and geographical), fundraising capacity,
staff impact, organizational impact, cost-benefit and overall satisfaction.
Moreover, the dataset will include qualitative data on the perceived most
significant change which the pilots have brought about within the
organizations of pilot users.
</td> </tr>
<tr>
<td>
**Methodologies for data collection / generation**
</td>
<td>
Users and stakeholders who will participate in the pilots will be required to
provide feedback as part of the ongoing monitoring framework that will be
established to keep track of the performance and results of the pilots through
a semi-structured questionnaire designed to elicit both qualitative and
quantitative data, administered via a blend of face-to-face, telephone and/or
digital means.
</td> </tr>
<tr>
<td>
**Storage and volume of the dataset**
</td>
<td>
The dataset will be stored in either standard Word (.docx) or standard Excel
(.xlsx) format on RTC NORTH’s on-premise server and will be preserved for 5
years following the end of the project, before eventually being deleted. For
security reasons, the dataset will be backed up at the end of each night on
removable tape and will be stored in a secure place.
</td> </tr>
<tr>
<td>
**Metadata and standards**
</td>
<td>
The dataset will be accompanied with basic descriptive metadata (i.e. pilot
description, pilot participant name, organisation, etc.).
</td> </tr>
<tr>
<td>
**For whom might the dataset be useful?**
</td>
<td>
The dataset will be used by INVITE partners to validate and fine-tune the Open
Innovation Lab with the help of its Advisory Board members, based on the needs
and requirements of a range of open innovation users and stakeholders who will
have taken part in its interconnected pilot programme. Moreover, the dataset
would be useful for open innovation intermediaries as well as (e-)training
providers who design and/or offer relevant open innovation services.
</td> </tr>
<tr>
<td>
**Confidentiality**
</td>
<td>
The data and feedback collected will not be shared and/or re-used (outside the
framework of the project and/or beyond its completion) to ensure the
confidentiality of the pilot programme participants and their input into the
iterative pilot deployment, validation and fine-tuning phase. Still, some
selected meaningful and properly anonymised aggregated data and results from
pilot activities will be integrated within the INVITE Replication and Scale-up
Guide that will be openly disseminated, with a view to providing interested
stakeholders with insights and practical guidelines on how to replicate the
results of INVITE within their context.
</td> </tr> </table>
## Open Innovation Lab user data (WP3)
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
Open Innovation Lab user data (INVITE_OI2Labdata)
</th> </tr>
<tr>
<td>
**Type of study**
</td>
<td>
Study aimed at informing the validation and fine-tuning of the Open Innovation
Lab and its ICT infrastructure (including the online collaborative space, open
multi-sided marketplace, crowdfunding tool and e-learning environment) as well
as of the business models designed to guide its market rollout and scale up.
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
The dataset will contain demographic data of the users registered to Open
Innovation Lab as well as data stemming from the use of the functionalities
offered by the ICT tools integrated within its platform.
</td> </tr>
<tr>
<td>
**Methodologies for data collection / generation**
</td>
<td>
Users and stakeholders wishing to employ the ICT tools of the Open Innovation
Lab will be required to provide basic data through a dedicated online
registration form built into the ICT infrastructure to be implemented by
INTRASOFT. Moreover, the platform will automatically keep track of all
necessary data pertaining to the online activities of the users who will
access the Open Innovation Lab via their unique username-password combination.
</td> </tr>
<tr>
<td>
**Storage and volume of the dataset**
</td>
<td>
The data collected will be stored on a secure database to be employed by the
ICT infrastructure of the Open Innovation Lab. INTRASOFT will be the partner
mainly responsible for managing the server and for assigning specific user
roles with access and administration privileges. Administration privileges can be
assigned only to the consortium members. There will be a secure environment
per user, in order to guarantee the integrity of the user’s personal data (an
illustrative sketch of these access rules is given after this table).
</td> </tr>
<tr>
<td>
**Metadata and standards**
</td>
<td>
Descriptive metadata (i.e. title, author and keywords) will be created to
accompany the dataset.
</td> </tr>
<tr>
<td>
**For whom might the dataset be useful?**
</td>
<td>
The dataset will be used by INVITE consortium members to gain insights,
analyze and improve the experience of users within the ICT infrastructure of
the Open Innovation Lab and thus provide the evidence required to fuel its
demand-driven validation and fine-tuning.
</td> </tr>
<tr>
<td>
**Confidentiality**
</td>
<td>
In order to ensure the privacy of the users of the Open Innovation Lab as well
as safeguard the project partners’ commercial interests in view of its post-
project market roll-out, the dataset will remain confidential. The data will
not be shared and/or reused outside the framework of the project and/or beyond
the time of its completion.
</td> </tr> </table>
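As a rough illustration of the access model described above, the sketch below
encodes two simple rules: administration privileges are reserved for
consortium members, and each user can access only their own personal data. The
role names and functions are assumptions made for the example, not the actual
platform design.

```python
# Minimal sketch of role-based access rules (illustrative assumptions).

def can_administer(role: str, is_consortium_member: bool) -> bool:
    # Administration privileges can be assigned only to consortium members.
    return role == "admin" and is_consortium_member

def can_access(requesting_user: str, data_owner: str, role: str,
               is_consortium_member: bool) -> bool:
    # Each user has a secure environment: only their own data, unless admin.
    return requesting_user == data_owner or can_administer(role, is_consortium_member)

assert can_access("alice", "alice", "user", False)       # own data: allowed
assert not can_access("alice", "bob", "user", False)     # other's data: denied
assert can_access("operator", "bob", "admin", True)      # consortium admin: allowed
```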
## Business model validation and improvement (WP4)
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
Business model validation and improvement (INVITE_BMCValidation).
</th> </tr>
<tr>
<td>
**Type of study**
</td>
<td>
A study blending both quantitative and qualitative elements based on an
interview-based survey of users and stakeholders participating in the pilot
activities of the Open Innovation Lab as well as members identified as
potential early adopters and lead users from the Advisory Board of the
project. The main target of the study is to validate and improve the business
models developed for the Open Innovation Lab with a view to feeding the
elaboration of the business plan that will guide its market rollout beyond the
end of the project.
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
The dataset will be comprised of both qualitative as well as quantitative data
regarding the different elements (e.g. revenue streams, value propositions,
collaborators, etc.) of the business models designed for the Open Innovation
Lab.
</td> </tr>
<tr>
<td>
**Methodologies for data collection / generation**
</td>
<td>
Data will be collected via interviews utilising semi-structured questionnaires
that will balance the flexibility of open-ended questions and the focus of
restricted ones that can be easily quantified. The questionnaires will be
administered either face-to-face or via electronic means of communication (e.g.
web conferencing apps, telephone, etc.).
</td> </tr>
<tr>
<td>
**Storage and volume of the dataset**
</td>
<td>
The dataset will be stored in a simple spreadsheet format (such as .xlsx or
similar) which will be kept at the private server of Q-PLAN. The number of the
records within the dataset will depend on the methodology which will be
designed at a later stage of the project and utilised in the framework of the
study. For security reasons, the dataset will be backed up automatically on a
daily basis from Q-PLAN’s server to an external hard disk drive owned by the
company and kept on its premises.
</td> </tr>
<tr>
<td>
**Metadata and standards**
</td>
<td>
Descriptive metadata will be created to accompany the dataset (i.e. title,
author and keywords).
</td> </tr>
<tr>
<td>
**For whom might the dataset be useful?**
</td>
<td>
The dataset will be of great use to INVITE consortium members, enabling them
to validate and refine the business models of the Open Innovation Lab, paving
the way for its commercial exploitation after the completion of the project.
</td> </tr>
<tr>
<td>
**Confidentiality**
</td>
<td>
In order to ensure the privacy of the study’s participants as well as
safeguard the project partners’ commercial interests that may arise from the
exploitation of the Open Innovation Lab, the dataset will remain confidential.
The data will not be shared and/or reused outside the framework of the project
and/or beyond the time of its completion.
</td> </tr> </table>
## Dissemination and communication results (WP6)
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
Dissemination and communication results (INVITE_Dissemination).
</th> </tr>
<tr>
<td>
**Type of study**
</td>
<td>
Study aimed at monitoring and assessing the results of the different
dissemination and communication activities of the project using an appropriate
framework with quantitative metrics.
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
Data collected with a view to measuring and assessing the results of the
project in terms of dissemination and communication with measurable metrics
tailored to the different respective channels and tools employed as well as
the activities deployed.
</td> </tr>
<tr>
<td>
**Methodologies for data collection / generation**
</td>
<td>
Primary data will be collected through the dissemination activity reports of
project partners regarding media products, project events, external events,
general publicity, etc. Third party tools will be employed as well (e.g.
Google Analytics for metrics related to the website, social media statistics,
information reported by the partners about published articles in media,
outreach at third party events etc.). Only aggregated data will be collected
in this dataset.
</td> </tr>
<tr>
<td>
**Storage and volume of the dataset**
</td>
<td>
The dataset will be stored in standard spreadsheet format (.xlsx) and its size
will evolve throughout the course of the project. Moreover, the dataset will
be kept on the private server of the project’s Dissemination Manager
(E-UNLIMITED) and will be shared with the project partners for reporting and
progress assessment purposes. It will be preserved for 5 years following the
end of the project, before eventually being deleted and for security reasons,
it will be backed up in an external hard-drive every week.
</td> </tr>
<tr>
<td>
**Metadata and standards**
</td>
<td>
Descriptive metadata will be provided (such as title, type of data, data
collection method and keywords).
</td> </tr>
<tr>
<td>
**For whom might the dataset be useful?**
</td>
<td>
The dataset will be used by INVITE partners for reporting and progress
assessment purposes. The data might also be used for dissemination purposes to
promote achievements of the project.
</td> </tr>
<tr>
<td>
**Confidentiality**
</td>
<td>
The raw data collected will not be shared and/or re-used (outside the
framework of the project and/or beyond its completion). Still, some selected
anonymous extracts might be included in project reports and deliverables which
are public and therefore become publicly available. No personal information
shall be included in the dataset and therefore no personal information shall
be made public.
</td> </tr> </table>
# Conclusions and way forward
This initial version of INVITE’s Data Management Plan (DMP) has set the stage
for the implementation of a sound data management methodology in the framework
of INVITE. To this end, it has laid out the overall methodological principles
to be followed by project partners in this respect with a view to making the
data which they will collect, process and/or generate as FAIR as possible,
while also taking into account data security and ethical aspects.
Moreover, it has provided an initial, yet still meaningful overview of the
valuable datasets that are expected to be created within the context of the
project along with a description of each dataset emphasizing the methodology
to be followed for their management during the lifespan of the project and
beyond. The information provided in this respect will be enriched as the
activities of the project progress to provide a more accurate description at
later stages of INVITE’s implementation.
Indeed, the DMP is a living document to be updated throughout the course of
INVITE based on the latest developments and available project results. In
fact, its initial version will serve as the basis for producing two
additional, further elaborated versions, on M12 and M36 of the project, with a
view to delivering an accurate, up-to-date and comprehensive data management
plan before the completion of INVITE.
# INTRODUCTION AND OVERVIEW
## General overview of data collection activities in SMARTEES
SMARTEES is a multimethod project where the established methodologies of
social simulation and social innovation research are merged to conduct
multimethod analyses of the socio-political conditions – including contextual
socioeconomic structures – that encourage and discourage collective energy use
patterns. Its key methodological strength comes from combining both
qualitative and quantitative methodologies with a user-centred approach from
the perspectives of sociology, psychology, economics, political science,
social simulation, and social innovation. This allows us to integrate
interdisciplinary knowledge into a comprehensive modelling framework, a unique
policy analysis tool – the SMARTEES policy sandbox, providing support to the
success of the Energy Union by developing alternative, more robust policy
pathways that contribute to identifying ways to overcome citizen resistance
and increase acceptability. This robust and transparent analytical tool, in
turn, will meet the pressing need identified in the SET-plan roadmap by
offering practitioners new ways of developing, testing and adapting policy
measures and technology diffusion strategies to implement the SET-plan. The
work in the project is delivered in different work packages, which have their
own methodological approaches.
## Purpose and scope of this document
This DMP first gives a comprehensive overview of all data types collected in
SMARTEES, including a detailed mapping of data collection activities to work
packages and SMARTEES partners (see Section 2). Afterwards, each partner’s
responsibilities with respect to data collections, data handling and analysis,
as well as data storage are identified (Section 3). In the final section, the
document defines SMARTEES standards for all data collections (including
research ethical standards and GDPR procedures), data storage and handling,
data documentation, pseudonymization or anonymization, access to data for
exploitation and future use, and data deletion. It needs to be noted that the
DMP is understood to be a dynamic document that will be adjusted and adapted
during the course of the project. It will be integrated into the SMARTEES
project handbook (Deliverable 1.3), which will be implemented in SMARTEES as a
wiki-based knowledge/procedures database that is constantly updated. In that
respect, the DMP describes the SMARTEES data management at the point in time
it was delivered to the European Commission. The document was written with
reference to the Guidelines to FAIR data management in Horizon 2020 (
_http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-datamgt_en.pdf_ )
and the GDPR (Regulation (EU) 2016/679).
## Future revisions of this document
The DMP is a dynamic document, which will be constantly updated. Since the DMP
will become part of the dynamic project handbook (Deliverable D1.3), it will
be integrated in the latter´s revision cycles. However, a formal revision of
the DMP will be provided on an annual basis, which means in July 2019 and July
2020.
# DATA COLLECTION
SMARTEES is a mixed methods project and its methodological strength comes from
combining qualitative and quantitative techniques with an interdisciplinary
approach. This allows us to integrate interdisciplinary knowledge into a
comprehensive modelling framework, which provides practitioners a unique
analysis tool for policy measures and technology diffusion strategies, by
conducting multimethod analysis of the socio-political conditions encouraging
/ discouraging collective energy use patterns across European member states.
SMARTEES includes data collection and handling activities in most of the WPs
(excluding WP1 and WP9), and these activities strongly depend on each other.
This complexity demands strict coordination between the different tasks and
WPs, as input from preceding tasks is required not only within the same WP but
also in other WPs. Furthermore, SMARTEES is a project that depends strongly on
secondary data provided by the case partners (and other sources). This makes
it necessary to define procedures for how access rights to secondary data are
obtained and how those data are used and matched with primary data.
Table 1 presents an overview of the various data collections that are part of
SMARTEES (see the first column) and indicates which WP(s) participate in each
data collection or data handling. For example, the first data collection is
“Documents and Records” and WPs 3 and 4 contribute to this data collection.
#### _Table 1:_ _Data collection methods used in different WPs_
<table>
<tr>
<th>
**Method/WP**
</th>
<th>
**WP1**
</th>
<th>
**WP2**
</th>
<th>
**WP3**
</th>
<th>
**WP4**
</th>
<th>
**WP5**
</th>
<th>
**WP6**
</th>
<th>
**WP7**
</th>
<th>
**WP8**
</th>
<th>
**WP9**
</th> </tr>
<tr>
<td>
Secondary data acquisition (i.e.
documents and records)
</td>
<td>
Overall data management and
acquisition procedures for
secondary data
</td>
<td>
Sets the framework and data
collection guidelines
</td>
<td>
✓
</td>
<td>
✓
</td>
<td>
</td>
<td>
Synthesizes/
analyses
data
</td>
<td>
Sets the standards for data
collection/curation
</td>
<td>
Disseminates/exploits results
</td>
<td>
Sets ethical requirements
</td> </tr>
<tr>
<td>
Interviews
</td>
<td>
✓
</td>
<td>
✓
</td>
<td>
</td> </tr>
<tr>
<td>
“On site” visits and observations
</td>
<td>
✓
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Questionnaire surveys
</td>
<td>
</td>
<td>
✓
</td>
<td>
</td> </tr>
<tr>
<td>
Focus groups
</td>
<td>
✓
</td>
<td>
✓
</td>
<td>
</td> </tr>
<tr>
<td>
Discussions
</td>
<td>
</td>
<td>
✓
</td>
<td>
✓
</td> </tr>
<tr>
<td>
Workshops
</td>
<td>
✓
</td>
<td>
</td>
<td>
✓
</td>
<td>
✓
</td> </tr> </table>
## Research data and personal information for non-research purposes
We define data as all research data obtained from respondents directly or
indirectly, through various research methods, for the project research
purposes. We distinguish data, as defined, from personal information used for
external communication purposes in WP8 (for example quotes from experts or
interviewees for project videos, blogs, etc. meant for communication). Such
personal information will be published only after informed written consent of
the involved individuals has been given 1 . In this latter instance,
personal information used for communication will not be anonymized, encrypted
or pseudonymized.
## Data types
Table 2 presents key characteristics of each data collection in SMARTEES. The
first column indicates the type of data collection, the second column
indicates where the data, for each data collection method, come from, the
third column indicates how the data are collected, the fourth column indicates
whether data, from a given data collection type, will be published in an open
access mode at the end of the project, the next column lists the tasks and/or
WPs that contribute to a given data collection type, and finally the last
column names all partners involved in a given data collection. Note that the
same data collection type may be used in several independent data collections
in different WPs.
##### _Table 2:_ _Details of data collection_
<table>
<tr>
<th>
**Type of data collection**
</th>
<th>
**Source of data**
</th>
<th>
**How data is collected**
</th>
<th>
**Open access**
</th>
<th>
**WP/task**
</th>
<th>
**Partners** (lead partners underlined)
</th> </tr>
<tr>
<td>
Documents and records / secondary data acquisition
</td>
<td>
Available documents and data (previous studies and research; case evaluations/
assessments; plans of action and periodical reports implemented in each case
study; other relevant documents in each case studies; social innovation
cluster assessments)
</td>
<td>
Documents published by relevant stakeholders are identified through our case
contacts, in the European Commission’s document database, and through search
engines (e.g. Google, Google Scholar) using appropriate search words (e.g.,
“energy poverty”, “everyday mobility”).
In addition, relevant scientific and policy papers are identified through
databases (e.g. Web of Science, Scopus). Other papers indicating additional
potentially useful sources of information will be subsequently explored as
well.
Existing quantitative datasets will also be acquired, following permission
procedures where necessary.
</td>
<td>
Y, if no restrictions by the owners of the data are made
</td>
<td>
1.3, 2.3,
WP3,
4.1, 4.2,
4.9,
WP5,
6.1-6.3, WPs 7 and 8
</td>
<td>
_EI-JKU_ , _JH_ ,
_K &I _ , _NTNU_ ,
_UG_ , _UOT_ ,
ACC, ICLEI,
SAM-EA,
UDC, UI
</td> </tr>
<tr>
<td>
Individual in-depth interviews
</td>
<td>
Responses of key informants and
stakeholders
(e.g.,
policymakers, social innovation pioneers/frontrunners, social actors,
sustainable entrepreneurs / business actors)
</td>
<td>
In-depth interviews with the case studies’ main actors will be conducted as
part of WP3.
Interviewees will be selected by taking into account the features of each case
study.
A detailed interview protocol appropriate for individual interviews will be
developed in task 4.4 (WP4).
</td>
<td>
Y, after
pseudonymization with consent of the interviewees
</td>
<td>
1.3, 2.3,
WP3,
4.4,
WP5,
6.1-6.3, WPs 7 and 8
</td>
<td>
_EI-JKU_ , _K &I _ ,
_NTNU_ , _UDC_ ,
_UG_ , _UOT_ ,
ICLEI, JH,
</td> </tr>
<tr>
<td>
“On site” observations
</td>
<td>
Observational data from site visits
</td>
<td>
Direct observations through “on site” visits and field visits by project
partners and representatives from learning case cities/regions
</td>
<td>
Y, as brief case study reports
</td>
<td>
1.3, 2.3,
WP3,
4.4,
WP5,
6.1-6.3, WPs 7 and 8
</td>
<td>
_EI-JKU_ , _ICLEI_ ,
_K &I _ , _NTNU_ ,
_UG_ , _UOT_ ,
ACC, JH,
SAM-EA,
UDC, UI
</td> </tr>
<tr>
<td>
Questionnaire surveys
</td>
<td>
Responses of participants across all case study clusters
</td>
<td>
WP4 will coordinate the overall efforts related to the surveys.
A core questionnaire, allowing a level of comparability across all case
studies, will be developed and later adapted for each case cluster.
The case responsible partner will lead data collection in each case study in a
subtask for each case.
</td>
<td>
Y, after
pseudonymization; respondents will be informed prior to data collection
</td>
<td>
1.3, 2.3,
3.4,
WP4,
WP5,
6.1-6.3, WPs 7 and 8
</td>
<td>
_EI-JKU_ , _JH_ ,
_K &I _ , _NTNU_ ,
_UG_ , _UOT_ ,
ICLEI, UDC,
UI
</td> </tr>
<tr>
<td>
Focus group interviews
</td>
<td>
Key informants and
stakeholders
(e.g.,
policymakers, social innovation pioneers, social actors,
sustainability entrepreneurs /business actors)
</td>
<td>
Focus group interviews with case-studies main actors will be conducted as part
of WP3.
Individuals belonging to each group will be selected by taking into account
the features of each case study.
A detailed focus group interview protocol will be developed in task 4.4 (WP4).
</td>
<td>
Y, after
pseudonymization with consent of the interviewees
</td>
<td>
1.3, 2.3,
WP3,
4.4,
WP5,
6.1-6.3, WPs 7 and 8
</td>
<td>
_EI-JKU_ , _K &I _ ,
_NTNU_ , _UDC_ ,
_UG_ , _UOT_ ,
ICLEI, JH,
</td> </tr>
<tr>
<td>
Discussion events
</td>
<td>
Responses of project partners, case study teams and actors
</td>
<td>
JH will arrange a series of in-depth discussions with project partners and
case-study teams regarding requirements for primary data in each case.
UOT will engage a sample of citizens, consumers, social and business actors,
including social innovators to discuss forthcoming energy policy
implementation.
</td>
<td>
Y, in the form of brief reports
</td>
<td>
1.3, 2.3, WP4,
WP5,
6.1-6.3, WPs 7 and 8
</td>
<td>
_EI-JKU_ , _JH_ ,
_NTNU_ , _UG_ ,
_UOT_ , ACC,
ICLEI, K&I,
SAM-EA,
UDC, UI
</td> </tr>
<tr>
<td>
Workshops
</td>
<td>
Responses of workshop participants (project partners, case study actor teams
and follower city representatives)
</td>
<td>
Data on views of relevant experts and stakeholders are collected during
research workshops, during data analysis workshop, during multi-stakeholders
workshops, and during workshops with follower cities/regions.
</td>
<td>
Y, in the form of brief reports
</td>
<td>
1.3,
WP2,
3.2,
WP4,
WP5,
6.1-6.3, WPs 7 and 8
</td>
<td>
_EI-JKU_ , _ICLEI_ ,
_JH_ , _NTNU_ ,
_UG_ , _UOT_ ,
ACC, K&I,
SAM-EA,
UDC, UI
</td> </tr> </table>
# PARTNER RESPONSIBILITIES
The data collections and data processing for which each WP is responsible are
described in this section. The data collection/processing tasks in WP4 are
described in more detail than the tasks in other WPs, because WP4 deals with
the main data collection/handling and coordination tasks that support the work
in other WPs. For a complete overview of data collection responsibilities see
_**Appendix I** _ (constantly updated during the project).
## WP1 (NTNU)
In the SMARTEES project, NTNU (WP1 lead beneficiary) is responsible for
management of the project. SMARTEES will use a large amount of existing data
and data collected specifically for the purpose of the study. This data needs
to be integrated, monitored, securely stored, and made available for analysis
within and beyond the project. Even though data collection, curation,
analyses, and exploitation will be conducted in WPs 4, 6, 7, and 8, WP1 will
have the overall responsibility to secure the compliance of the data
collection and handling with data protection laws (national and GDPR) and the
open data pilot regulations. WP1 has developed the DMP for SMARTEES – this
document (task 1.3), which will be updated in M14 and M26. All information
about data collection methodology and standards, data coding, referencing and
processing, and about the exploitation of the data during the project and
beyond is included in this DMP. Furthermore, WP1 will be responsible for
acquiring access to secondary datasets and engaging in legal agreements with
the providers of the secondary data.
## WP2 (EI-JKU)
A methodological cornerstone of SMARTEES is its research strategy to carry out
empirical work meeting the scientific excellence criteria for each of the
involved disciplines separately, but at the same time, defining a joint new
best practice methodology for cooperative multi-disciplinary policy research.
For this aim, EI-JKU (WP2 lead beneficiary) will prepare the common
theoretical framework for WP4-WP7 (task 2.1), which will be scrutinized and
refined in a subsequent public workshop (task 2.1) and interdisciplinary
research workshops on theory and methods (task 2.4). This will ensure a
thorough multi-method analysis of socio-economic and structural drivers of
five types of energy related social innovations in relation to smart
technology diffusion, energy behaviour, consumer engagement and prosumerism.
EI-JKU will also prepare the background for case study analysis (literature,
desk research), and ensure consistent data collection by defining the process
with which the theoretical findings can be fully considered in all data
collection activities (task 2.3).
## WP3 (K&I)
By thoroughly analysing five carefully selected case clusters that cover
different social innovations, European regions and cultures, and different
sub-groups driving the innovations, SMARTEES secures the policy relevance and
up-scalability of the findings. K&I (WP3 lead beneficiary) is responsible for
resuming contact with these actors, and preparing, for each reference and
supporting case, a plan of work with the correspondent timeline (task 3.1).
Task 3.2 will then prepare a description of the social dynamics characterizing
different cases, and a systematic analysis of each social innovation cluster
with its reference framework. For each case cluster, issues such as the
following will be examined: actors involved with their specific role; phases
of development of the social innovation, (how the social innovation
originated, how the social innovation has been consolidated, etc.); features
of each kind of actors’ participation and empowerment dynamics; local social,
economic, and environmental dynamics; local resources and assets; regulations;
incentives (including but not only economic); barriers, conflicts,
resistances, stress factors and negative effects in specific groups on the one
hand and facilitating factors and success elements on the other (including
acceptance and rejection dynamics); the co-existence of bottom-up and top-
down dynamics; and finally how the up-scaling process has worked and is
working. Sources of information will be available documents and data (previous
studies and research; assessments, plans of action and periodical reports
implemented in each case study; other relevant documents in each case studies;
social innovation cluster assessments; available datasets about the behaviour
of the actors, etc.); stakeholders’ interviews and direct observation through
“on site” visits. Task 3.2 will be responsible for planning and conducting the
interviews and visits. Interview protocols will be developed in Task 4.4 to
ensure their compatibility with the ABM methodology, and each case partner’s
contact institution in SMARTEES is responsible for conducting the interviews.
To understand, prima facie (i.e. before the implementation of the survey – WP4
– and the drafting of the scenarios – WP5), how social innovations in energy
transitions work “in action”, an overall analysis will be conducted
(Task 3.3). Based on the social innovation profiles and the overall analysis
of these, task 3.4 will identify knowledge gaps to be filled through the
surveys in WP4 (e.g. questions to be considered in the questionnaire survey),
and will propose a set of indicators to be considered in the preparation of
scenarios in WP5.
## WP4 (JH)
SMARTEES will make as much use of existing data as possible, though it is
anticipated that there is a need, on a case-by-case basis, to supplement these
existing data to provide additional insights. Therefore, the project
scrutinizes a large amount of existing and new data. The dedicated objective
of JH (WP4 lead beneficiary) is to collect existing and new data necessary for
WP 5-7 and make them available for further use also after the end of SMARTEES.
The tasks in WP4 are hence associated with coordinating requirements for new
data, and ensuring that all data, new and existing, are properly annotated
with metadata to enable their consistent application. Datasets we expect to
obtain access to, as a direct result of the collaboration with the case
cluster cities/regions, are results from previous citizen surveys, case
evaluations, energy use data, mobility patterns, socio-demographics in the
respective regions, consumption data, relevant local business indicators, etc.
Contracts, about the conditions under which these datasets will be used by the
partners in accordance to GDPR, will be made with the owners of the datasets.
JH will start by developing an initial schema for discussion with all
partners, along with a method for continual synthesis of new metadata into
that schema as the project progresses (task 4.1). The methods will be based on
the tools of knowledge elicitation and integration with the other project
partners as domain experts, such as telephone interviews, questionnaires, and
structured document analysis. The work done here will draw on experience in
the GLAMURS project developing a data integration protocol. This metadata
specification facilitates data integration among the various data sets used by
the project. WP4 will then provide a protocol for capturing sources of data to
ensure that all sources of data used by the project are sufficiently
documented, and that all information regarding raw data, metadata, conditions
of use, and any licences are recorded in such a way as to facilitate efficient
retrieval by members of the project consortium (task 4.2).
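As an illustration of the kind of source-capture record task 4.2 envisages,
holding raw-data location, conditions of use and licence information for
efficient retrieval, a minimal sketch is given below. The field names and
example values are assumptions made for illustration, not the actual SMARTEES
schema developed in task 4.1.

```python
# Sketch of a per-dataset source-capture record (assumed structure).
from dataclasses import dataclass

@dataclass
class DataSourceRecord:
    dataset_id: str
    provider: str
    description: str
    conditions_of_use: str
    licence: str
    location: str  # where raw data and metadata are stored

catalogue: dict[str, DataSourceRecord] = {}

record = DataSourceRecord(  # hypothetical example entry
    dataset_id="cluster-1/energy-use",
    provider="case city (hypothetical)",
    description="Household energy use data from a previous citizen survey",
    conditions_of_use="project-internal analysis only, per contract with owner",
    licence="restricted; owner permission required",
    location="secure project repository",
)
catalogue[record.dataset_id] = record
print(catalogue["cluster-1/energy-use"].conditions_of_use)
```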
JH will also arrange a series of in-depth discussions with project partners,
and case-study teams to first generate a long list of potential requirements,
and then in discussion, distil this list into a realistic set of requirements
for primary data collection (task 4.3). Although the initial description of
work provided some indicative estimates for survey samples, the actual sample
size calculations will be performed within this task. The requirements of the
models, and the data needs of the case study partners themselves will be taken
into consideration during these discussions. Following from task 4.3, a
relatively short core questionnaire will be developed (task 4.5) that will
allow a level of comparability across all case studies surveyed. Beyond this
core, this task will develop a collaborative process whereby partners can
contribute to a battery of measures and items to be drawn on in task 4.6, i.e.
adaptation of questionnaire framework for each case cluster. In many cases the
constructs to be measured will be established and validated, but where this is
not the case, smaller pilot studies may be required to ensure a high level of
construct and content validity for any newly developed measure in the SMARTEES
project. Which questions are asked in each case study and to which types of
respondents (beyond the agreed core), will depend on requirements and context
of that case study. Details of the sample requirements for each questionnaire
survey will be developed following earlier discussions in task 4.3, and will
form part of the survey brief to be provided to the professional companies
subcontracted to undertake the empirical survey work. Each case study cluster
will be required to develop its own briefing document for this survey
subcontract. The lead partner for each case study cluster (i.e., K&I. NTNU,
UDC) will work with the overall coordinating partner for this task (JH) in
developing the sampling details, the questionnaire tool, and any data
management issues associated with each case. As the data from these surveys
will be fed into the agent-based models, the development of the questionnaire
will require detailed discussion with project partners intending to utilise
the data being generated by these surveys.
WP4 will also develop detailed interview protocols depending on the
requirements and context of each social innovation case study (task 4.4), to
be utilised in task 3.2. For example, it may be appropriate to use individual
in-depth interviews in some cases, whereas in others, group interviews or
workshops may be deemed more appropriate. The development of the case study
interview protocols will be discussed with the agent-based modelling teams
(WP7) to ensure compatibility of thematic content with the requirements of the
models. The interviews themselves will be conducted under Task 3.2. The
interview data itself (recordings/transcripts), however, forms part of the
overall data of the project, and is therefore included in task 4.7 as a
subsection of the data collection. Data collection in each case study in a
subtask for each case will be led by the case responsible partner (i.e., K&I.
NTNU, UDC). The survey work will be subcontracted to professional survey
companies (including participant recruiting, printing, postage, data
processing), using the briefing document generated in task 4.6. Any further
data collection (e.g. additional empirical studies) will follow directly from
the specific requirements determined in task 4.3.
After collection of the primary and secondary data, JH will host a workshop
(task 4.8) to discuss issues such as ensuring proper interoperability for use
in the models and smooth transfer to task 6.4 with project partners (i.e.,
NTNU, K&I, UDC, UOT, UG). Based on the discussions at the workshop and
collaboration with task 6.4 and WP7, JH will produce data summaries for each
case cluster by carrying out initial exploratory data analysis (task 4.9), and
prepare a data report giving an integrated overview of the case studies as a
draft paper for submission to a journal (task 4.10).
## WP5 (UDC)
In the SMARTEES project, UDC (WP5 lead beneficiary) will develop policy
insights and future policy scenarios that are plausible, theoretically solid,
and empirically grounded on qualitative and quantitative data (task 5.3). This
task draws upon the theoretical framework developed in WP2 (tasks 2.2, 2.3)
and the empirical research insight on conducted case-studies (WP3, WP4 and
WP6). UDC is also responsible for establishing scenario logics, testing policy
alternatives and identifying tipping points to co-produce a set of dynamic
simulations of policy implementations for each case study involved through a
first phase of multi-stakeholder deliberative workshops (task 5.4). Policy
scenarios will be co-created through iterative phases engaging policy and
local actors in reflexive-thinking activities with SMARTEES researchers.
SMARTEES’ researchers will provide a formally represented model for each case-
study policy scenario, considering the interactions among actors and networks
within it and with its context. Following the integration of inputs from
participatory workshops and elaboration of realistic policy scenarios by
agent-based modelling techniques (task 5.5), SMARTEES researchers will prepare
and execute second phase of multi-stakeholders deliberative workshops for each
local case-study involved (task 5.6). In this task, SMARTEES will engage a
sample of citizens, consumers, social and business actors, including social
innovators to discuss forthcoming energy policy implementation.
## WP6 (NTNU)
WP6 (NTNU as lead beneficiary) will synthesize, structure, harmonize and
analyse primary and secondary data with a special focus on energy equality and
utilize them for an analysis of policy relevant social innovation drivers and
barriers, and a structured input for the agent-based modelling in WP7. In
doing so, WP6 is directly dependent on input from WP2 providing the
theoretical framework for the analyses in WP6, and WP4, providing the raw data
for the integration in WP6, and WP5 providing the scenarios for the analyses.
WP6 feeds then the synthesized data into the agent-based simulations in WP7
and the theory relevant implications of the results back into WP2.
Furthermore, WP6 feeds into WP8 by providing reports about the policy
implications of the analyses.
With input from task 2.2 and task 3.2, the first task in WP6 (i.e. task 6.1)
is to identify all relevant agent types, stakeholders and subpopulations (e.g.
household members from a high diversity of households, decision-makers in
energy companies, network utilities, building companies, city planners,
policymakers), and to document their characteristics and social networks
through social network analysis. The results of this task will feed into WP7.
Based on the theoretical framework provided by WP2 and the agent types
identified in task 6.1, task 6.2 will then synthesize and analyse primary and
secondary data provided by WP4 to identify the drivers and barriers for the
diffusion of the social innovations analysed in all case clusters. This task
will feed directly into the ABM in WP7. NTNU will be responsible for
identification of the decision-making mechanisms (task 6.3), which together
with relevant drivers and barriers identified in task 6.2 are crucial for a
realistic modelling of agent behaviour, based on the data delivered from WP4
and interviews with key stakeholders and insights from WP2 and WP3. The
results of this task will also directly feed into the programming of the
agents in WP7. Both the analysis of empirical data in tasks 6.1-6.3 and the
input from the simulations in WP7 will then be fed back to theory development
about social innovation diffusion in task 2.3 (task 6.5). WP6 will also
integrate data, monitor the process of data generation and model development
in WPs 2-4 with a special focus of providing the ABM in WP7 with the necessary
values for model parameterization, calibration, and validation based on data
synthesized and pre-analysed in WP4 (task 6.4).
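As a toy illustration of the social network analysis envisaged in task 6.1,
the sketch below documents a small stakeholder network and ranks actors by
degree centrality. The edges are invented for the example, and networkx is
merely one common choice of library, not necessarily the tool used in
SMARTEES.

```python
# Illustrative only: documenting a stakeholder network and computing a
# simple centrality measure to identify well-connected agent types.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("household", "energy company"),
    ("household", "city planner"),
    ("energy company", "network utility"),
    ("city planner", "policymaker"),
    ("policymaker", "energy company"),
])

for actor, score in sorted(nx.degree_centrality(G).items(),
                           key=lambda kv: -kv[1]):
    print(f"{actor}: {score:.2f}")
```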
## WP7 (UG)
UG (WP7 lead beneficiary) will be responsible for defining the structural
requirements of the data to enhance compatibility between the modelling
infrastructure and all data collection and curation activities in WP4 (task
7.2). UG will then parameterise the SMARTEES agent-based social simulation
model, which is formalized in task 7.3, for selected cases using a variety of
data from the case studies integrated by WP6 in task 7.4. In this iterative
process, the selected case models will be calibrated to create realistic
simulations of the social innovation processes in each case.
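The following toy sketch illustrates the general idea of parameterising and
calibrating an agent-based diffusion model against empirical data: a free
parameter is searched so that the simulated adoption rate matches an observed
target. The contagion mechanism, sample size and target value are invented for
the example and bear no relation to the actual SMARTEES model.

```python
# Toy calibration of a social-contagion parameter (illustrative assumptions).
import random

def simulate_adoption(p_adopt: float, n_agents: int = 500,
                      steps: int = 20, seed: int = 42) -> float:
    """Fraction of agents adopting after `steps` rounds of social contagion."""
    rng = random.Random(seed)
    adopted = [rng.random() < 0.02 for _ in range(n_agents)]  # initial adopters
    for _ in range(steps):
        share = sum(adopted) / n_agents
        for i in range(n_agents):
            if not adopted[i] and rng.random() < p_adopt * share:
                adopted[i] = True
    return sum(adopted) / n_agents

target = 0.35  # hypothetical adoption rate observed in a case study
candidates = [p / 20 for p in range(1, 21)]  # parameter grid 0.05 .. 1.00
best_p = min(candidates, key=lambda p: abs(simulate_adoption(p) - target))
print(f"calibrated p_adopt ≈ {best_p:.2f}")
```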
## WP8 (ICLEI)
In the SMARTEES project ICLEI (WP8 lead beneficiary) will provide a group of
follower cities, which is committed to learning from the experiences of the
SMARTEES reference cases, direct access and first-hand information in a
confidential setting. During the course of the project, these follower cities
are invited to two field visits/ workshops to the main and/or supporting
reference case and three online training sessions (webinar or similar format).
The feedback from these follower cities will provide input for the simulation
tool and datasets as exploitation material in the SMARTEES. Specifically, for
each of the five case study clusters, one follower city will be selected for
further in-depth analysis aimed at preparing this city for implementing the
social innovation themselves. Furthermore, in task 8.5, workshops utilizing
the pro version of the SMARTEES policy sandbox IT tool will be developed and
conducted in all 10 participating case cities/regions, 1 in Brussels for
policymakers at the EU level, and in 5 follower cities (1 per cluster).
# DATA MANAGEMENT
This chapter describes the procedures applied in SMARTEES for the different
steps of data collection, management, storage, and publication in detail.
## Formal ethics approval
SMARTEES involves human participants, data collection and processing, and
involvement of a non-EU country, which raises ethical issues. The SMARTEES
consortium will comply with all relevant national, European and international
ethical regulations and professional codes of conduct. All partners will also
conform to Horizon 2020 ethical guidelines, including General Data Protection
Regulation, “Data protection and privacy ethics guidelines”, the “Guidance for
Applicants on Informed Consent”, and national regulations. Table 3 identifies
the data protection officer or national agency responsible for the approval of
every data collection activity or inquiry for secondary data.
Their opinions will guide how SMARTEES handles the different types of data.
### Involvement of human participants
There are three sub-points related to the involvement of human participants to
be managed:
1. Details on the procedures and criteria that will be used to identify/recruit research participants must be provided.
2. Detailed information must be provided on the informed consent procedures that will be implemented for the participation of humans.
3. Templates of the informed consent forms and information sheet must be submitted on request.
###### Details on the procedures and criteria that will be used to
identify/recruit research participants
Participants in the primary quantitative data surveys will be recruited from
already registered members of national or local web-panels or specifically
recruited by the subcontracted survey companies. The participants will be
sampled to be representative for each case. Participants are 18 years or older
and must be able to give informed consent. They will be informed about the aim
of the study, the collected data, the aim of connecting the survey data with
secondary datasets (such as for example energy consumption data) via a
pseudonymized key table, data handling, storage and pseudonymization and
anonymization procedures as well as publication of the anonymized data and its
inclusion in the Open Data Pilot. This will be done – in accordance to GDPR –
in simple language, easy to understand for the participants. In cases the data
collection will be conducted online, by following the link to participate they
explicitly give their consent to participate. In cases the data collections
are conducted in personal interviews the participants will give written
consent (templates for the consent/information sheet see point 3). If
participants are recruited from existing panels, they will earn points in the
point system of their panel operator as reward for their participation. If
they are recruited specifically for this study, they will participate in a
lottery of rewards for their participation. Participants are also informed
that they can withdraw their consent until the data is anonymized without any
disadvantages and without having to give a reason. A contact (telephone and
e-mail) will be given where they can request to be informed about all data
that is stored about them in the project. From the point of anonymizing the
data is no longer personal information.
For the other empirical studies, such as in-depth interviews and focus groups,
information is presented in written form when participants are recruited. It
will be repeated immediately before data collection starts, and the consent
form is signed by the participant before the interview. Participants are also
informed that they can withdraw their consent until the data is anonymized,
without any disadvantages and without having to give a reason. A contact
(telephone and e-mail) will be given where they can request to be informed
about all data stored about them in the project. From the point of
anonymization, the data is no longer personal information.
Participants in this part of the empirical work will be recruited locally from
the general population older than 18 years and from experts in the fields
studied. Only participants who are able to give informed consent will be
recruited, and information sheets and consent forms will be designed according
to the templates provided by the Norwegian Centre for Research Data.
Participants will be recruited through mailing lists, newspaper
advertisements, snowball sampling, posters, or the like. Expenses that they
incur for participating will be reimbursed.
Table 3: Data protection officer or national agency in each task
<table>
<tr>
<th>
**Name of task**
</th>
<th>
**Task lead partner**
</th>
<th>
**Data protection officer / National agency** (responsible for the approval of
data collection/use/inquiry)
</th> </tr>
<tr>
<td>
T 1.3 Overall data management
</td>
<td>
NTNU
</td>
<td>
Norwegian Centre for Research Data (NSD)
http://www.nsd.uib.no/nsd/english/index.html
</td> </tr>
<tr>
<td>
T 2.4 Interdisciplinary research workshops
</td>
<td>
EI-JKU
</td>
<td>
Marie Holzleitner
</td> </tr>
<tr>
<td>
T 3.1 Case-studies main actors involvement
</td>
<td>
K&I
</td>
<td>
Giovanna Murari
</td> </tr>
<tr>
<td>
T 3.2 Profiles of types of social innovation
</td>
<td>
K&I
</td>
<td>
Giovanna Murari
</td> </tr>
<tr>
<td>
T 3.3 Overall analysis of social innovation
</td>
<td>
K&I
</td>
<td>
Giovanna Murari
</td> </tr>
<tr>
<td>
T 3.4 Models of social innovation
</td>
<td>
K&I
</td>
<td>
Giovanna Murari
</td> </tr>
<tr>
<td>
T 4.1 Metadata specification
</td>
<td>
JH
</td>
<td>
Doug Salt, Katherine McBay, Gary Polhill, Tony Craig
</td> </tr>
<tr>
<td>
T 4.2 Data curation
</td>
<td>
JH
</td>
<td>
Doug Salt, Katherine McBay, Gary Polhill, Tony Craig
</td> </tr>
<tr>
<td>
T 4.3 Identifying requirements for primary data
</td>
<td>
JH
</td>
<td>
Katherine McBay, Gary Polhill, Tony Craig
</td> </tr>
<tr>
<td>
T 4.4 Development of interview protocols
</td>
<td>
UDC
</td>
<td>
Adina Dumitru/Amparo Alonso
</td> </tr>
<tr>
<td>
T 4.5 Development of the questionnaire framework for the surveys
</td>
<td>
JH
</td>
<td>
Katherine McBay, Gary Polhill, Tony Craig
</td> </tr>
<tr>
<td>
T 4.6.1 Adaptation of questionnaire framework for Case Cluster 1
</td>
<td>
K&I
</td>
<td>
Giovanna Murari
</td> </tr>
<tr>
<td>
T 4.6.2 Adaptation of questionnaire framework for Case Cluster 2
</td>
<td>
NTNU
</td>
<td>
Norwegian Centre for Research Data (NSD)
</td> </tr>
<tr>
<td>
T 4.6.3 Adaptation of questionnaire framework for Case Cluster 3
</td>
<td>
NTNU
</td>
<td>
Norwegian Centre for Research Data (NSD)
</td> </tr>
<tr>
<td>
T 4.6.4 Adaptation of questionnaire framework for Case Cluster 4
</td>
<td>
UDC
</td>
<td>
Adina Dumitru/Amparo Alonso
</td> </tr>
<tr>
<td>
T 4.6.5 Adaptation of questionnaire framework for Case Cluster 5
</td>
<td>
JH
</td>
<td>
Katherine McBay, Gary Polhill, Tony Craig
</td> </tr>
<tr>
<td>
T 4.7.1 Data collection in Case Cluster 1
</td>
<td>
K&I
</td>
<td>
Giovanna Murari
</td> </tr>
<tr>
<td>
T 4.7.2 Data collection in Case Cluster 2
</td>
<td>
NTNU
</td>
<td>
Norwegian Centre for Research Data (NSD)
</td> </tr>
<tr>
<td>
T 4.7.3 Data collection in Case Cluster 3
</td>
<td>
NTNU
</td>
<td>
Norwegian Centre for Research Data (NSD)
</td> </tr>
<tr>
<td>
T 4.7.4 Data collection in Case Cluster 4
</td>
<td>
UDC
</td>
<td>
Adina Dumitru/Amparo Alonso
</td> </tr>
<tr>
<td>
T 4.7.5 Data collection in Case Cluster 5
</td>
<td>
JH
</td>
<td>
Katherine McBay, Gary Polhill, Tony Craig
</td> </tr>
<tr>
<td>
T 4.8 Data analysis workshop
</td>
<td>
JH
</td>
<td>
Katherine McBay, Gary Polhill, Tony Craig
</td> </tr>
<tr>
<td>
T 4.9 Preliminary data analysis
</td>
<td>
JH
</td>
<td>
Katherine McBay, Gary Polhill, Tony Craig
</td> </tr>
<tr>
<td>
T 4.10 Integrated data report
</td>
<td>
JH
</td>
<td>
Katherine McBay, Gary Polhill, Tony Craig
</td> </tr>
<tr>
<td>
T 5.1 Developing a common criteria list for defining policy scenarios
</td>
<td>
UDC
</td>
<td>
Adina Dumitru/Amparo Alonso
</td> </tr>
<tr>
<td>
T 5.2 Building a common framework for the development of local-embedded future
policy scenarios
</td>
<td>
UDC
</td>
<td>
Adina Dumitru/Amparo Alonso
</td> </tr>
<tr>
<td>
T 5.3 Definition of future policy scenarios
</td>
<td>
UDC
</td>
<td>
Adina Dumitru/Amparo Alonso
</td> </tr>
<tr>
<td>
T 5.4 Exploration of future policy scenarios through multi-stakeholders
deliberative workshops
</td>
<td>
UOT
</td>
<td>
IT analyst Oana Velcota, IT&C Department,
UVT
</td> </tr>
<tr>
<td>
T 5.5 Integration of inputs from participatory workshops and elaboration of
realistic policy scenarios to be tested by Agent-based Modelling techniques
</td>
<td>
UDC
</td>
<td>
Adina Dumitru/Amparo Alonso
</td> </tr>
<tr>
<td>
T 5.6 Refinement phase: Analysis of energy future scenarios and transforming
them into strategic interventions
</td>
<td>
UOT
</td>
<td>
IT analyst Oana Velcota, IT&C Department,
UVT
</td> </tr>
<tr>
<td>
T 5.7 Elaboration of policy recommendations and guidelines for the
implementation and assessment of new locally embedded low carbon policies
</td>
<td>
UDC
</td>
<td>
Adina Dumitru/Amparo Alonso
</td> </tr>
<tr>
<td>
T 6.1 Identifying stakeholders and subpopulations and their social networks
</td>
<td>
UOT
</td>
<td>
IT analyst Oana Velcota, IT&C Department,
UVT
</td> </tr>
<tr>
<td>
T 6.2 Identifying drivers of and barriers towards social innovation
</td>
<td>
UOT
</td>
<td>
IT analyst Oana Velcota, IT&C Department,
UVT
</td> </tr>
<tr>
<td>
T 6.3 Analysing decision-making mechanisms of actors
</td>
<td>
NTNU
</td>
<td>
Norwegian Centre for Research Data (NSD)
</td> </tr>
<tr>
<td>
T 6.4 Integrating data and providing the input for
ABM in WP7
</td>
<td>
UOT
</td>
<td>
IT analyst Oana Velcota, IT&C Department,
UVT
</td> </tr>
<tr>
<td>
T 6.5 Scrutinizing the results of the simulations for theory development
</td>
<td>
NTNU
</td>
<td>
Norwegian Centre for Research Data (NSD)
</td> </tr>
<tr>
<td>
T 6.6 Energy equality through social innovation
</td>
<td>
UOT
</td>
<td>
IT analyst Oana Velcota, IT&C Department,
UVT
</td> </tr>
<tr>
<td>
T 7.4 Parameterising selected cases in model
</td>
<td>
UG
</td>
<td>
Wander Jager
</td> </tr>
<tr>
<td>
T 7.5 Experimentation with simulated scenarios in selected cases
</td>
<td>
ICLEI
</td>
<td>
Wolfgang Teubner
</td> </tr>
<tr>
<td>
T 8.1 Developing and implementing a communication and dissemination strategy
</td>
<td>
ICLEI
</td>
<td>
Wolfgang Teubner
</td> </tr>
<tr>
<td>
T 8.3 Follower city network
</td>
<td>
ICLEI
</td>
<td>
Wolfgang Teubner
</td> </tr>
<tr>
<td>
T 8.4 Project video blog
</td>
<td>
ICLEI
</td>
<td>
Wolfgang Teubner
</td> </tr>
<tr>
<td>
T 8.5 Developing and programming policy sandbox
tool
</td>
<td>
ICLEI
</td>
<td>
Wolfgang Teubner
</td> </tr>
<tr>
<td>
_T 9. Compliance with ethics requirements_
</td>
<td>
_NTNU_
</td>
<td>
_Norwegian Centre for Research Data (NSD)_
</td> </tr> </table>
###### Informed consent procedures
Before participating in an online survey, panel members will be invited to
participate in the survey via e-mail. In the e-mail, the information from the
informed consent form will be presented and a link to the survey will be
included. The participants will be instructed that by clicking the link they
consent to participate in the study as described in the information included
in the e-mail. A contact (telephone and e-mail) will be given where they can
request to be informed about all data stored about them in the project. From
the point of anonymization, the data no longer includes personal information.
Where interviews are conducted by project researchers, respondents will be
informed prior to being interviewed and asked for their consent. The
principles of written informed consent will be applied. Their participation in
research activities (e.g., interviews, focus groups) is entirely voluntary.
They may give notice of their withdrawal from research activities at any time.
Participants are also informed that they can retract their consent until the
data is anonymized, without any disadvantages and without having to give a
reason.
###### Informed consent forms and information sheet
The information sheets and consent forms will be based on the standard form
provided by the Norwegian Centre for Research Data (NSD), the Norwegian
organization acting as Data Protection Official with regard to the GDPR for
social science research organizations, and will be in line with national
regulations. Future updates of the DMP will include documentation of all
local consent forms/information sheets used in SMARTEES (see _**Appendix II**
_ ).
### Data collection and processing
There are three sub-points related to data collection and processing:
1. Copies of opinion or confirmation by the competent Institutional Data Protection Officer and/or authorization or notification by the National Data Protection Authority must be submitted (whichever applies according to the GDPR and national law).
2. If the position of a Data Protection Officer is established, their opinion/confirmation that all data collection and processing will be carried out according to EU and national legislation should be submitted.
3. Detailed information must be provided on the procedures that will be implemented for data collection, storage, protection, retention and destruction, with confirmation that they comply with national and EU legislation.
###### Copies of opinion (sub-points 1 and 2)
All partners that collect and process data have confirmed that they will do so
according to the GDPR and national law. All confirmations by the
institutional or national data protection officers regarding the conduct of
data collections in SMARTEES in accordance with EU and national legislation
will be collected in _**Appendix III** _ of the DMP as soon as they are
received. They will be provided at the earliest possible point in time prior
to starting the data collections.
###### Information on the procedures for data collection, storage, protection,
retention and destruction
This document (D1.2), the Work Plan (D1.1) and the Project Handbook (D1.3)
provide this information. All procedures are in accordance with the principles
of the “Guidelines on Data Management in Horizon 2020”.
###### Further data collection and processing standards
The procedures in this section, on SMARTEES data collection, storage,
protection, retention and destruction, are based on the “Guidelines on Data
Management in Horizon 2020”.
Data collection, storage, protection, retention and destruction will be
conducted according to national and EU legislation (GDPR), as all partners
dealing with data have declared. All partners have identified a Data
Protection Officer (see table above). SMARTEES includes different forms of
data collection (primary data surveys based on panels run by professional
survey providers, surveys with specifically recruited participants,
interviews, focus groups, on-site observations, workshops, discussions, and
secondary data sources). In some cases these datasets need to be linked via a
pseudonymized key table to be able to identify the same unit of analysis
(e.g., a respondent or a household). The main principle for all types of data
collections is to pseudonymize the datasets at the earliest possible point in
the process by separating the identifying information from the rest of the
data and storing the identifying information, together with the pseudonyms, in
a key table kept separately from the rest of the data. This key table will be
stored at the level of the research partner responsible for supporting the
respective case. The key tables will not be shared with other partners in the
project or with externals. This personal data will be stored under the
strictest precautions, using for example a service for storing sensitive data
such as the one provided by the University of Oslo 2 or an equivalent
service currently being established by NTNU, which is approved for storing
sensitive data. It will under no circumstances be made public or shared with
the other project partners. The conducting party will ensure that data
collection and pseudonymization comply with EU legislation and the SMARTEES
project handbook. Furthermore, data produced in one WP will always be
pseudonymized before it is shared with the other WPs. Consent is sought before
storing collected material in electronic form. Data collection, processing and
storage will comply with the GDPR and the national rules of each of the
countries where interviews or surveys take place. At the end of the project,
the key tables will be deleted and the pseudonymized data will be fully
anonymized by deleting the pseudonyms and indirectly identifying information.
The latter will be done by checking whether combinations of, for example,
sociodemographic information lead to groups with unique combinations smaller
than three units (e.g., respondents or households), in which case
sociodemographic data will be aggregated into larger categories until no group
with fewer than three units exists.
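To illustrate the key-table principle, the following is a minimal sketch in
Python, assuming a tabular dataset held in a pandas DataFrame; the column names
(`name`, `email`, `energy_use_kwh`) and the code format are hypothetical, and
the actual pseudonymization routines will be defined per case by the
responsible partner.

```python
import secrets
import pandas as pd

def pseudonymize(df: pd.DataFrame, identifying_cols: list):
    """Split a dataset into a pseudonymized table and a key table.

    Each record receives a random participant code. The key table,
    which maps codes to identifying information, is stored separately
    on the secure server and is never shared with other partners;
    deleting it at the end of the project anonymizes the data.
    """
    df = df.copy()
    df["participant_code"] = [secrets.token_hex(8) for _ in range(len(df))]
    key_table = df[["participant_code"] + identifying_cols]
    pseudonymized = df.drop(columns=identifying_cols)
    return pseudonymized, key_table

# Hypothetical example data; column names are illustrative only.
survey = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "email": ["a@example.org", "b@example.org"],
    "energy_use_kwh": [230, 410],
})
data, keys = pseudonymize(survey, identifying_cols=["name", "email"])
# `data` may be shared within the consortium; `keys` stays on the secure server.
```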
In more detail, the main quantitative survey data will be pseudonymized when
it is received by SMARTEES from the survey companies. SMARTEES will only
collaborate with survey companies that can document clearance by all national
and EU legislative bodies relevant to the study and document their procedures
for GDPR compliance. The contracts with the survey companies will specify the
consent procedures (only panel members who have given their consent will be
recruited; participants will be explicitly informed that clicking the link to
the online survey equals consent to participate; this consent can be retracted
at any point until the data is anonymized). The dataset will be pseudonymized
by removing all directly or indirectly identifying information and storing it
in a key table. After the end of the project, the survey company will delete
the key table.
All informants and subjects in interviews, focus groups and workshops will
give written informed consent to their participation, either by receiving and
signing a _consent and information_ letter or e-mail, ensuring that they
understand what the consent concerns and what the consequences of
participation will be. This will explicitly include information about which
secondary data is processed about the same informant and linked to the answers
in the surveys or interviews, and how the data will be pseudonymized.
Identifying information will be stored separately from the study data. After
the project has ended, the pseudonyms and the key tables will be deleted, thus
fully anonymizing the data. Video and audio material from research interview
sessions will be deleted after transcription and quality control. Besides
these new data collection activities, SMARTEES will also reanalyse existing
data sources, such as datasets provided by the case cluster cities. Conditions
for the use of these data in relation to the GDPR will be fixed in use
licences or contracts (WP1 coordinates this work), which also regulate how the
secondary data and the primary datasets will be linked through key tables.
All raw data will be stored and protected on SMARTEES encrypted servers for
secure data storage that meet the Norwegian Act relating to Personal Data
Filing Systems. Personal data directly linking persons and data will always be
kept separate from the datasets. Identifying personal data will be retained
for a maximum of 1 year after the completion of collection to allow for
thorough quality control. All such data will thus be deleted by January 2021
at the latest.
<table>
<tr>
<th>
**Procedures for data protection**
</th> </tr>
<tr>
<td>
The project will follow the principle of “data minimization” by restricting the acquisition of primary or secondary data to what is strictly necessary for the project.
</td> </tr>
<tr>
<td>
Data will be pseudonymized at the earliest possible point in time where linking of datasets is necessary; otherwise, data will be anonymized as early as possible.
</td> </tr>
<tr>
<td>
Pseudonymization will be conducted by the partners responsible for a respective case by separating identifying information from the rest of the data and storing it in a file, separated from the other data. Anonymization will also be conducted by the same partners, but here no identifying information will be retained.
</td> </tr>
<tr>
<td>
Personal data in key tables will be stored under the strictest precautions and under no circumstances made public.
</td> </tr>
<tr>
<td>
Data will always be pseudonymized before it is shared with the other partners.
</td> </tr>
<tr>
<td>
Consent is sought before storing collected material in electronic form.
</td> </tr>
<tr>
<td>
When the data collected through interviews contains personal information, this data will be pseudonymized during transcription.
</td> </tr>
<tr>
<td>
Any text produced based on research results will respect the principles of privacy and anonymity, not identifying any informants directly or indirectly without their explicit consent.
</td> </tr>
<tr>
<td>
All secondary data sources shall contain already pseudonymized datasets where linking them with other data is necessary; anonymized secondary datasets will be acquired wherever possible.
</td> </tr>
<tr>
<td>
Licences and contracts for the use of secondary data will be made, specifying their use in relation to the GDPR.
</td> </tr>
<tr>
<td>
Activities carried out outside the EU will be executed in compliance with the legal obligations in the country where they are carried out and in the EU.
</td> </tr>
<tr>
<td>
The activities must also be allowed in at least one EU Member State.
</td> </tr>
<tr>
<td>
All data transferred between project partners (within or outside the EU) will be restricted to pseudonymized or anonymized data, and transfers will only be made in encrypted form via secured channels.
</td> </tr>
<tr>
<td>
At the end of the project, all data will be anonymized before it is made available for open access. The pseudonyms and the corresponding key tables will be deleted at the end of the project.
</td> </tr>
<tr>
<td>
Since agent-based models are often geographically explicit, the visual representation of the data will be provided in a form that does not allow identifying individual analysis units (e.g., households), for example by dividing the region of analysis into sections that contain at least three response units and placing these units randomly within the section in the visual output.
</td> </tr> </table>
### Involvement of non-EU countries
SMARTEES’ non-EU partner (NTNU, the coordinator) has confirmed that the
ethical standards and guidelines of Horizon 2020 will be rigorously applied,
regardless of the country in which the research is carried out. Activities
carried out outside the EU will be executed in compliance with the legal
obligations in the country where they are carried out, with the extra
condition that the activities must also be allowed in at least one EU Member
State.
In SMARTEES, data will be transferred between the named non-EU country
(Norway) and countries in the European Union to allow for joint analyses and
storage of all data in the common database. All data transferred between
project partners (within or outside the EU) will be restricted to
pseudonymized or anonymized data, and transfers will only be made in encrypted
form via secured channels.
Two partners (JH and ACC) are located in the UK, which at the start of the
project is an EU Member State. Should this status change during the project
period, the same rules as for the Norwegian partner will apply.
### Survey questionnaire for each case cluster
The SMARTEES survey effort is coordinated and administrated in WP4, while the
theoretical framework for all data collection activities, the scientific
design of the survey questionnaires (i.e., the questions and items to be
considered) and the structural requirements of the data collection are defined
in WP2, WP3 and WP7. First, a relatively short core questionnaire will be
developed that will allow a level of comparability across all case studies
surveyed. Later, tailored questionnaires for each case cluster, to which
partners can contribute a battery of measures and items, will be developed by
the lead partner for each case study cluster, who will work with the overall
coordinating partner for the survey questionnaire in developing the sampling
details, the questionnaire tool, and any data management issues associated
with each case. Data collection in each case study will be organized as a
subtask led by the partner responsible for the case. The survey work will be
subcontracted to a professional survey company (including participant
recruiting, printing, postage, and data processing), using the briefing
document developed for each case study cluster. A professional company
subcontracted to undertake the empirical survey work is needed to a) translate
the questionnaire into local languages, b) program the questionnaire in an
adequate online survey tool if necessary, c) recruit survey participants
(according to the selection criteria defined by the consortium) and d) send
out the questionnaire to participants. Emphasis is put on hiring companies
that specialize in the above-mentioned tasks and that have a proven track
record of multi-national efforts, to ensure the highest quality data material.
We will collect several quotes for the SMARTEES survey service and choose the
one offering the best value for money.
Companies in question must document their routines and procedures for data
management, sensitive information and personal data – and thus prove that all
data collection services and processing will be carried out according to the
EU GDPR and national legislation. The chosen company will transfer
pseudonymized data to the SMARTEES project server through a secure file
transferring system.
## Data collection procedures
In order to achieve quality assurance, quality control and consistency
throughout the project, specific data collection procedures will be added to
the DMP as they are developed by the involved partners ahead of the different
data collections. All procedures will be developed to meet general scientific
quality criteria for data collections 3 as indicated in the following table:
<table>
<tr>
<th>
**Quality standards**
</th> </tr>
<tr>
<td>
**Accuracy**
Is the data collected correct and complete? Are the data entry procedures
reliable?
**Efficiency**
Are the resources used to collect data the most economical available to
achieve those objectives?
**Effectiveness**
Have the objectives been achieved?
Have the specific results planned been achieved?
**Feasibility and timeliness**
Can data be collected and analysed cost effectively? Can it provide current
information in a timely manner?
**Relevance**
What is the relevance of the data/information/evidence for primary
stakeholders?
Is data collection compatible with other efforts? Does it complement,
duplicate or compete?
**Security**
Is the confidentiality ensured?
**Utility**
Does data provide the right information to answer the questions posed?
</td> </tr> </table>
### Document and record study
The document and record (data) study procedure will be developed ahead of the
data collection and documented in an updated DMP. The first steps in this
respect are identifying the need for data in the different cases and the
availability of secondary data.
### Individual in-depth interviews
Interview data procedure will be developed ahead of the data collection and
documented in an updated DMP.
### Case study observations
Case study procedure will be developed ahead of the data collection and
documented in an updated DMP.
### Questionnaire surveys
The questionnaire survey procedures will be developed ahead of the data
collection and documented in an updated DMP.
### Focus group interviews
Focus group data collection procedure will be developed ahead of the data
collection and documented in an updated DMP.
### Discussions
Discussion event data collection procedure will be developed ahead of the data
collection and documented in an updated DMP.
### Workshops
Workshop data collection procedure will be developed ahead of the data
collection and documented in an updated DMP.
## Data documentation
All collected data shall include a metafile when stored on the SMARTEES secure
storage solution and/or the SMARTEES SharePoint server. The file will later be
made available to external users of the data. This metafile shall explain the
kind of data included, the personnel involved, the date and duration of the
data collection, variable names/labels, the recruiting procedures, response
rates, whether or not the data is pseudonymized or anonymized, the related WPs
and tasks, and finally a summary. _**Appendix IV** _ provides two templates,
for qualitative and quantitative data sets, which might be adapted during the
course of SMARTEES according to the needs identified in WP4 (responsible for
the curation of datasets and providing their metadata), especially after the
initial activities in that WP.
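A machine-readable version of such a metafile could, for example, be a small
JSON record stored next to the data file. The sketch below is purely
illustrative: the field names mirror items from the templates in Appendix IV
but do not constitute a fixed SMARTEES schema, and all values are hypothetical.

```python
import json
from datetime import date

# Hypothetical metadata record mirroring the documentation template;
# field names and values are illustrative, not an agreed SMARTEES schema.
metadata = {
    "dataset": "SMARTEES-DAT1.1_Survey_Cluster2_1",
    "data_type": "quantitative survey",
    "collected_from": "2019-09-01",
    "collected_to": "2019-11-30",
    "responsible_partner": "NTNU",
    "wp": 4,
    "task": "4.7.2",
    "pseudonymized": True,
    "anonymized": False,
    "recruiting_procedure": "web panel, subcontracted survey company",
    "response_rate": 0.42,
    "summary": "Household survey on district heating acceptance.",
    "created": date.today().isoformat(),
}

# Write the metafile alongside the data set it documents.
with open("SMARTEES-DOC1.1_Survey_Cluster2_1.json", "w", encoding="utf-8") as f:
    json.dump(metadata, f, indent=2, ensure_ascii=False)
```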
## Data storage and curation
All personal data will be stored and protected on SMARTEES encrypted server
space for secure data storage, described in 4.4.1. Pseudonymized and
anonymized data will be stored in the SMARTEES SharePoint solution described
in 4.4.2 in encrypted form (see 4.4.3). WP1 (NTNU) and WP4 (JH) are
responsible for the curation of all data collected in SMARTEES and its safe
storage. The storage solutions for personal raw-data, pseudonymized and
anonymized data include daily backup routines to prevent data loss. All data
files will be assigned a persistent and unique Digital Object Identifier (DOI)
by the end of the project through services such as Figshare, Zenodo or Dryad.
### Personal raw-data and pseudonym key tables
The non-anonymized raw data and the key tables for pseudonymized data will be
stored on secure server solutions, such as the one hosted and operated by the
University of Oslo (UiO) and their _Services for sensitive data_ (TSD,
_https://www.uio.no/english/services/it/research/storage/sensitive-data/_ ),
which complies with the Norwegian regulations regarding individual privacy.
This secure storage solution is NTNU's standard for sensitive data (NTNU is
currently establishing a similar internal service, so SMARTEES may use NTNU's
own service instead). Each contact partner for the cases is responsible for
storing these data and pseudonymizing the datasets for the work in SMARTEES.
SMARTEES will establish these server spaces as soon as the first data is
produced. Our server solutions will comply with the regulations set by the
TSD. Backup is performed through UiO's regular backup system with the
addition of encryption. The encryption key is only available on the dedicated
terminal server, with a copy stored in safes at two separate locations. Data
transfers (import/export) to and from the service are handled by a
special-purpose file staging service, and the project administrator controls
access rights for all project members. By default, all project members are
able to transfer data in, but only the project administrator can transfer data
out. For security reasons, the TSD infrastructure is accessible only with a
two-factor login, i.e. a username, password and electronically generated
secure code (as in internet banking). Connecting to the system is first done
by accessing a login server via an encrypted SSH tunnel. From the login server,
users connect to project VMs via PCoIP (Windows)/ThinLinc (Linux). The
login procedure requires a one-time password generated by a smartphone app or
YubiKey. User guides are available at the TSD homepage 4 . The folder
structure on the server will be based on the SMARTEES case structure, with
separate folders for each case and WP. Datasets that belong to multiple WPs
will be stored in sub-folders of WP1 for the cases.
### Pseudonymized data
In cases where the research questions of SMARTEES make it necessary to link
several datasets (e.g., primary survey data and secondary data such as energy
consumption data), this linking of datasets will be conducted through
pseudonyms (e.g., a randomly generated participant code). A table matching the
pseudonyms with identifying information (e.g., names or e-mail addresses) will
be produced by the SMARTEES partner responsible for the respective case. All
identifying information will be replaced in the dataset by the pseudonyms. The
key table will be stored separately from the dataset on the secure server
solution as described under 4.4.1. At the end of the project, the key tables
will be deleted and the data fully anonymized.
Pseudonymized data will be stored at the SMARTEES SharePoint solution in
encrypted and password-protected form (see Section 4.4.4). SMARTEES’ partners
have access to this solution through personal logins provided by NTNU. The
overall folder structure is based on the SMARTEES WP and case structure; each
WP folder includes a data sub-folder and these will include folders for the
specific kinds of data produced.
### Anonymized data
All data collection and processing that will be done during SMARTEES will be
carried out according to national legislation and the EU General Data
Protection Regulation (Regulation (EU) 2016/679). The consortium and the
partners are responsible for following the ethical procedures in their
respective countries (see Section 4.1).
In cases where linking of datasets is not necessary, the data will be
anonymized at the earliest possible point in time. This will be done by
deleting directly identifying information from the file and aggregating
indirectly identifying information, such as sociodemographic data, to a level
where no unique combination of such data identifies groups of fewer than three
respondents. At the end of the project, all pseudonymized data will be
anonymized and the key tables will be deleted.
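The group-size check behind this aggregation rule can be sketched as a
k-anonymity test with k = 3. The following Python fragment is a minimal
illustration with hypothetical variables (`age`, `sex`); the project's actual
anonymization routine will be defined by the responsible partners.

```python
import pandas as pd

def smallest_group(df, quasi_identifiers):
    """Size of the smallest group formed by the combination of
    indirectly identifying variables (a k-anonymity check)."""
    return df.groupby(quasi_identifiers, observed=True).size().min()

# Hypothetical data: six respondents described by age and sex.
df = pd.DataFrame({"age": [23, 24, 25, 67, 68, 69],
                   "sex": ["f", "f", "f", "m", "m", "m"]})

# Exact ages make every respondent unique, so aggregate age into
# broader categories until no group has fewer than three units.
if smallest_group(df, ["age", "sex"]) < 3:
    df["age"] = pd.cut(df["age"], bins=[0, 40, 100], labels=["<=40", ">40"])

assert smallest_group(df, ["age", "sex"]) >= 3  # now safe to release
```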
Anonymized data will be stored at the SMARTEES SharePoint solution in
encrypted and password-protected form
(see Section 4.4.4). SMARTEES’ partners have access to this solution through
personal logins provided by NTNU.
The overall folder structure is based on the SMARTEES WP and case structure;
each WP folder includes a data sub-folder and these will include folders for
the specific kinds of data produced.
### Encryption standards and procedures
All data files will be transferred via secure connections and in encrypted and
password-protected form (for example with the open-source 7-Zip tool providing
full AES-256 encryption, _http://www.7-zip.org/_ , or the encryption options
implemented in MS Windows). Passwords will not be exchanged via e-mail but in
personal communication between the partners. The encryption solutions will be
chosen in accordance with the SMARTEES partners’ IT support.
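As an illustration of this procedure, the sketch below calls the standard `7z`
command-line tool (assumed to be installed) to create an AES-256 encrypted
archive with encrypted file names; the file paths are hypothetical, and the
password is entered interactively rather than exchanged by e-mail.

```python
import subprocess
from getpass import getpass

def encrypt_dataset(archive_path, files):
    """Create an AES-256 encrypted 7z archive.

    -p       sets the password (entered interactively, never e-mailed)
    -mhe=on  also encrypts the archive headers, i.e. the file names
    """
    password = getpass("Archive password: ")
    subprocess.run(
        ["7z", "a", f"-p{password}", "-mhe=on", archive_path, *files],
        check=True,
    )

# Hypothetical usage: package a pseudonymized dataset for transfer.
# encrypt_dataset("SMARTEES-DAT1.1_Survey_1.7z", ["survey_pseudonymized.csv"])
```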
### File name standards
To ensure that data files, as well as any other files in SMARTEES, have clear
names identifying their content, the following file name standards are used.
All documents shall be numbered by their type of document and assigned
subsequent numbering within each WP (first deliverable of WP1: **D1.1,** first
deliverable of WP2: **D2.1** ).
**XXX** : Identifies which main category the document belongs to. In order to
always easily identify the files, the project name **SMARTEES-** shall be
included as a prefix to all document categories.
**YYY** : Will always be a number assigned subsequently for each new document
in the XXX category and WP.
**ZZZ** : Issue number
<table>
<tr>
<th>
**XXX**
</th>
<th>
**XXX explanation**
</th>
<th>
**YYY**
</th>
<th>
**ZZZ**
</th> </tr>
<tr>
<td>
D
</td>
<td>
Deliverable
</td>
<td>
1.1, 1.2, 2.1, 2.2 etc.
</td>
<td>
1,2,3,etc.
</td> </tr>
<tr>
<td>
MAN
</td>
<td>
Management
</td>
<td>
1.1, 1.2, 2.1, 2.2 etc.
</td>
<td>
1,2,3,etc
</td> </tr>
<tr>
<td>
DAT
</td>
<td>
Data files
</td>
<td>
1.1, 1.2, 2.1, 2.2 etc.
</td>
<td>
1,2,3,etc
</td> </tr>
<tr>
<td>
DOC
</td>
<td>
Data documentation
file
</td>
<td>
1.1, 1.2, 2.1, 2.2 etc.
</td>
<td>
1,2,3,etc
</td> </tr>
<tr>
<td>
NOT
</td>
<td>
Notes
</td>
<td>
1.1, 1.2, 2.1, 2.2 etc.
</td>
<td>
1,2,3,etc
</td> </tr>
<tr>
<td>
MOM
</td>
<td>
Minutes of meeting
</td>
<td>
1.1, 1.2, 2.1, 2.2 etc.
</td>
<td>
1,2,3,etc
</td> </tr>
<tr>
<td>
PRE
</td>
<td>
Presentations
</td>
<td>
1.1, 1.2, 2.1, 2.2 etc.
</td>
<td>
1,2,3,etc
</td> </tr>
<tr>
<td>
PAP
</td>
<td>
Journal paper manuscript
</td>
<td>
1.1, 1.2, 2.1, 2.2 etc.
</td>
<td>
1,2,3,etc
</td> </tr> </table>
The file name shall always consist of: document number, document title and
issue number (in this order). Underscores shall be used between the document
number, the document title and the issue number. There shall be no spaces in
the document title. Logical short versions of words can be used in the
document title part of the filename in order to shorten it. If the document is
a draft version, this is indicated by an underscore and “DR” after the issue
number.
Example (this document, first issue): **SMARTEES-D1.2_DMP_1**
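A small validation helper can make this convention explicit and catch
malformed names before files are uploaded. The regular expression below is a
sketch of the rule described above, with the category list taken from the
table; it is not an official SMARTEES tool.

```python
import re

# Pattern: SMARTEES-<XXX><YYY>_<Title>_<ZZZ>[_DR]
# XXX = document category, YYY = WP.sequence number, ZZZ = issue number.
FILENAME_RE = re.compile(
    r"^SMARTEES-(D|MAN|DAT|DOC|NOT|MOM|PRE|PAP)"  # XXX category
    r"(\d+\.\d+)"                                  # YYY, e.g. 1.2
    r"_([A-Za-z0-9\-]+)"                           # title, no spaces
    r"_(\d+)"                                      # ZZZ issue number
    r"(_DR)?$"                                     # optional draft marker
)

assert FILENAME_RE.match("SMARTEES-D1.2_DMP_1")          # this document
assert FILENAME_RE.match("SMARTEES-DAT4.1_Survey_2_DR")  # hypothetical draft
assert FILENAME_RE.match("SMARTEES D1.2 DMP 1") is None  # spaces rejected
```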
## Open access to SMARTEES Data
An open access database (e.g., Zenodo or EUDAT) will be used to grant access
to anonymized quantitative and qualitative data after the project has ended.
Data based on the empirical results from this project will be stored in this
database (e.g., data and information collected via web-based surveys,
workshops, interviews, site visits, etc.). The researchers have a duty of
transparency to fully inform participants how the data will be used and for
what purpose. Thus, the ethically compliant data collection will be guided by
proportionality and follow the legal safeguards (described in WP9) to minimize
any risks related to the unauthorized release of personal and private
information. The empirical work of WP3 and WP4 will be coordinated (for data
management issues) by WP1 and then transferred into the database created by
WP4 (qualitative and quantitative interview data). Furthermore, SMARTEES will,
in WP8, provide a policy sandbox tool utilizing the data curated and collected
in SMARTEES. Secondary data will only be made openly accessible if the
licensing with the owners of the data allows for that.
## Deletion of data
Identifying personal data and key tables will be retained for a maximum of 1
year after the completion of collection to allow for thorough quality control.
All such data will thus be deleted by January 2021 at the latest. At this
point, all data will be anonymized. Anonymized data will not be deleted but
stored and made available for future use through the Open Data Pilot.
## Dissemination and exploitation of the data
Although the dissemination and exploitation strategy of the project is still
under development and will be published through deliverables D8.1, D8.4 and
D8.5, different types of data or data packages are likely to be used for
dissemination and exploitation purposes:
* _The policy sandbox IT tool_ : a policy sandbox allowing a realistic prospective analysis of existing and future policy and market incentive scenarios, with users at the case level and in policy-makers' workshops at the European level, is one of the major achievements of SMARTEES. This is an IT-based tool that allows policymakers to test policy scenarios on the simulated populations in the SMARTEES agent-based models. The tool will be provided in two versions: a light version (SMARTEES toolbox light), which will be available on the internet to allow manipulation of predefined scenarios based on the work done in WP5, WP6 and WP7, exploring the boundary conditions of these scenarios; and a more powerful professional version allowing for more fundamental variations in the policy scenarios, which will be programmed as a tool provided in a package together with a scenario analysis workshop (SMARTEES toolbox pro).
* _Data in a public database (see Section 4.5)_ : the data in the public database is expected to be one of the main exploitable results of the SMARTEES project. The selected database will be open source and operate under the regulations of the pilot on open research data in H2020. The compatibility of this open access character with SMARTEES' exploitation strategy will be analysed during the project (WP8), and restrictions to open access use will be kept to a minimum and only implemented if strong exploitation benefits stand against it or it is restricted by licences on secondary data. In any case, the procedures for data management set out in this document, particularly regarding WP4, will be followed and respected.
## Visual representation of data
Whenever it is relevant for the research questions to represent the
“neighbourhoods” present in cities, SMARTEES will make use of available GIS
data. In computational models of the cities, the GIS data may be linked to
other variables. For example, represented clusters of households (i.e., city
neighbourhoods) may be assigned values of other relevant variables (e.g., mean
energy use per month). This will be done by using central tendency measures
for aggregations of units, so that at no stage individual household data, or
data concerning the geographical locations of individual respondents, is made
public.
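A minimal sketch of this aggregation principle, assuming hypothetical
household coordinates and energy values: households are grouped into grid
sections, the mean per section is reported, and sections with fewer than three
units are suppressed (the randomized placement of units within a section,
mentioned in the data protection procedures above, would be applied at the
plotting stage).

```python
import pandas as pd

def aggregate_to_sections(households, section_size=500.0):
    """Aggregate household data to grid sections and report the mean
    energy use per section; sections with fewer than three households
    are suppressed so no individual unit is identifiable."""
    df = households.copy()
    df["section_x"] = (df["x"] // section_size).astype(int)
    df["section_y"] = (df["y"] // section_size).astype(int)
    agg = (df.groupby(["section_x", "section_y"])
             .agg(n=("energy_kwh", "size"),
                  mean_energy_kwh=("energy_kwh", "mean"))
             .reset_index())
    return agg[agg["n"] >= 3]  # suppress sections below three units

# Hypothetical coordinates in metres; values are illustrative only.
hh = pd.DataFrame({"x": [10, 120, 300, 900, 950],
                   "y": [15, 80, 410, 120, 130],
                   "energy_kwh": [230, 410, 300, 500, 520]})
print(aggregate_to_sections(hh))  # the two-household section is dropped
```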
## Open data pilot
SMARTEES provides access to all primary data collected and to secondary data
aggregations where publication of the data does not collide with the
copyrights of the initial data providers. This is in line with the H2020 open
data pilot OpenAIRE (see _https://www.openaire.eu/opendatapilot_ ). Data will
be made available as soon as SMARTEES' primary research and publication
interests are fulfilled. No embargo period is implemented once the SMARTEES
publications are finished, and no restrictions are foreseen on the re-use of
the data at that point. WP1 and WP4 are responsible for providing open access
to the data.
### General principles
All data in SMARTEES shall be open access if no other important principles
stand against it (such as for example restrictions on secondary data). In this
respect, the Consortium Agreement and the Grant Agreement are binding,
especially Section 9 (“Access rights”) and Section 10 (“Non-disclosure of
information”) of the Consortium Agreement are relevant for determining
potential need for access restriction to SMARTEES data.
### Size of the data
The size of the data files cannot be determined at this point of the SMARTEES
project. The questionnaire surveys are expected to have at least 5000
participants in total. A single focus group interview will typically have
between 6 and 12 participants. This section will be updated as soon as more
information is available.
### Target group for the data use
The data provided in SMARTEES will be of interest to policy makers,
businesses in the energy sector, stakeholder groups and other researchers.
The data will be documented and presented in a way that makes them accessible
to non-scientists.
### Access procedures
The data made available through the open data pilot will be fully accessible
without any restrictions (unless exploitation benefits require an embargo
period; see Section 4.7).
### Documentation procedures
All data files provided by SMARTEES include documentation of the content of
the data file and the context in which the data was collected (see Section
4.2.2). This is important to ensure the usefulness of the data for researchers
and analysts not involved in the original data collection. The documentation
procedures will be continuously updated during the SMARTEES project.
### Securing interoperability
For social science data, it is essential to document the use and source of the
theoretical concepts leading to data collections, to ensure interoperability
across different user groups. SMARTEES will create a glossary defining key
terms and concepts used in the project, as well as an ontology (WP2). This
glossary and the theoretical work conducted in WP2 will be part of the data
documentation. Furthermore, the sources of theoretical concepts and variable
measures will be documented to ensure comparability with previous and future
use. For quantitative data, the psychometric performance of the variables will
be documented. The use of theoretical concepts will be standardized within
SMARTEES in WP2 and aligned with previous use of the variables and concepts
wherever possible (there may be a number of newly developed concepts in
SMARTEES which cannot be aligned with previous work).
### Search keywords and data identification
Each data set will be assigned a unique and persistent Digital Object
Identifier (DOI) to make it identifiable when stored in a data repository.
Each file will be tagged with keywords for search purposes: SMARTEES is always
a keyword; in addition, keywords describe the type of data (e.g., “interview”,
“survey”), the participants (e.g., “representative sample”), and the topics
included (e.g., “energy poverty”, “consumer empowerment”).
### File types
Each data file in SMARTEES will be made available with accompanying
documentation of its content (see Section 4.2.2). Qualitative data such as
interview transcripts will be made available in their entirety in the form of
text documents (e.g., in PDF, TXT, RTF or DOCX format) in their original
language. In addition, excerpts of transcripts and other qualitative data will
be made available in English. Quantitative data will be made available in
standard data formats for popular statistical program packages to make reuse
as easy as possible (e.g., CSV, SAV, or R, with popular character encodings
such as ASCII or UTF-8 without BOM).
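As a brief illustration of the quantitative case, the sketch below writes a
hypothetical anonymized table to CSV in UTF-8 without a byte-order mark, with
a file name following the SMARTEES convention; the columns are illustrative
only.

```python
import pandas as pd

# Hypothetical anonymized dataset; column names are illustrative.
df = pd.DataFrame({"age_group": ["<=40", ">40"],
                   "mean_energy_kwh": [320.5, 410.2]})

# pandas writes UTF-8 without a BOM by default; "utf-8-sig" would add
# a BOM and is therefore deliberately not used here, per the DMP.
df.to_csv("SMARTEES-DAT1.1_Example_1.csv", index=False, encoding="utf-8")
```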
### Curation of the data after the end of SMARTEES
All data in SMARTEES will be transferred into a state-of-the-art data
repository.
# APPENDIX I: DETAILED DATA COLLECTION RESPONSIBILITIES
The following table presents a complete summary of all data collections in
SMARTEES with the responsibilities indicated. The table will be constantly
updated during the project as soon as data collections are started.
<table>
<tr>
<th>
ID
</th>
<th>
DATA
COLLECTION
</th>
<th>
SOURCE
</th>
<th>
DATA
</th>
<th>
WHEN
</th>
<th>
FORMAT
</th>
<th>
DATA FLOW
</th>
<th>
RESPONSIBLE FOR
DATA PRODUCTION
& MONITORING
</th>
<th>
RESPONSIBLE FOR
DATA PREPARATION
(TRANSCRIPTION,
DATA CLEANING)
</th>
<th>
RESPONSIBLE FOR
STORAGE /
PSEUDONYMIZATION
</th>
<th>
RESPONSIBLE
FOR TRANSFER
TO LONG-TERM
STORAGE
</th>
<th>
DATA TO BE
USED BY
</th>
<th>
WP/TASK
</th> </tr>
<tr>
<td>
ID1
</td>
<td>
Documents and records
</td>
<td>
Documents published by relevant stakeholders
</td>
<td>
Existing data and records on relevant case study clusters
</td>
<td>
M3-M10
</td>
<td>
Text file, numerical data
</td>
<td>
WP3 identify documents/records. WP4 supplement these existing data with new
data and integrate them. WP5 will define future policy scenarios based in
insight from WP4. WP6 analyses the data provided by WP4, and provides input to
WP7.
Further utilities of the data will be carried out in WP8.
</td>
<td>
WPs 1, 3 and 4
</td>
<td>
WPs 3 and 4
</td>
<td>
WPs 1, 3 and 4
</td>
<td>
WPs 1, 3 and 4
</td>
<td>
WPs 4-8
</td>
<td>
1.3, 2.3, WP3,
4.1, 4.2, 4.9,
WP5, 6.1-6.3,
WPs 7-8
</td> </tr>
<tr>
<td>
ID2
</td>
<td>
Interviews
</td>
<td>
Stakeholders interviews
</td>
<td>
Responses of key informants and stakeholders
</td>
<td>
M1-M10
</td>
<td>
Text file
</td>
<td>
WP4 will develop interview protocol. Interview with relevant stakeholders will
be conducted as part of WP3. Data are exploited in WPs 4, 5, 6 and 7 (analyses
and synthesis), and further utilized in WP8 as part of the dissemination
efforts.
</td>
<td>
WPs 1 and 3
</td>
<td>
WP 3
</td>
<td>
WPs 1, 3 and 4
</td>
<td>
WPs 1, 3 and 4
</td>
<td>
WPs 4-8
</td>
<td>
1.3, 2.3, WP3,
4.4, WP5, 6.1-
6.3, WPs 7-8
</td> </tr>
<tr>
<td>
ID3
</td>
<td>
“On site” visits and observations
</td>
<td>
Observations, interviews
</td>
<td>
Observations, interviews with relevant stakeholders at the site visited
</td>
<td>
M1-M10
</td>
<td>
Text file
</td>
<td>
"On site” visits and observations are carried out as part of the task in WP3.
Data are exploited in WPs 4, 5, 6 and 7 (analyses and synthesis), and further
utilized in WP8 as part of the dissemination efforts.
</td>
<td>
WPs 1 and 3
</td>
<td>
WP 3
</td>
<td>
WPs 1, 3 and 4
</td>
<td>
WPs 1, 3 and 4
</td>
<td>
WPs 4-8
</td>
<td>
1.3, 2.3, WP3,
4.4, WP5, 6.1-
6.3, WPs 7-8
</td> </tr>
<tr>
<td>
ID4
</td>
<td>
Questionnaire surveys
</td>
<td>
Questionnaire
</td>
<td>
Participants’ responses to questionnaire(s)
</td>
<td>
M7-M15
</td>
<td>
Numerical data
</td>
<td>
WP3 provides input to the designing of questionnaire in WP4. Data collection
is performed by hired data collection company(s) under the supervision of WP 1
and 4. Data are exploited in WPs 4, 5, 6 and 7 (analyses and synthesis), and
further utilized in WP8 as part of the dissemination efforts.
</td>
<td>
WPs 1 and 4
</td>
<td>
WP4
</td>
<td>
WPs 1 and 4
</td>
<td>
WPs 1 and 4
</td>
<td>
WPs 4-8
</td>
<td>
1.3, 2.3, 3.4,
WPs 4-5, 6.1-6.3,
WPs 7-8
</td> </tr>
<tr>
<td>
ID5
</td>
<td>
Focus groups
</td>
<td>
Focus group discussions
</td>
<td>
Responses of focus group participants
</td>
<td>
M1-M10
</td>
<td>
Text file
</td>
<td>
WP4 will develop interview protocol. Focus group interview with relevant
stakeholders will be conducted as part of WP3. Data are exploited in WPs 4, 5,
6 and 7 (analyses and synthesis), and further utilized in WP8 as part of the
dissemination efforts.
</td>
<td>
WPs 1 and 3
</td>
<td>
WP 3
</td>
<td>
WPs 1, 3 and 4
</td>
<td>
WPs 1, 3 and 4
</td>
<td>
WPs 4-8
</td>
<td>
1.3, 2.3, WP3,
4.4, WP5, 6.1-
6.3, WPs 7-8
</td> </tr>
<tr>
<td>
ID6
</td>
<td>
Discussions of events
</td>
<td>
Discussion with stakeholders
</td>
<td>
Responses of stakeholder/ participants during discussion
of events
</td>
<td>
M3-M36
</td>
<td>
Text file
</td>
<td>
WP4 will arrange a series of discussions with project partners, case-study
teams and agent-based modelling teams about requirements for primary data
collection. WP5 will engage a sample of citizens, consumers, social and
business actors to discuss future energy policy scenarios. Data are exploited
in WPs 3, 4, 5, 6 and 7 (analyses and synthesis), and further utilized in WP8
as part of the dissemination efforts.
</td>
<td>
WPs 1, 4 and 5
</td>
<td>
WPs 4 and 5
</td>
<td>
WPs 1, 4 and 5
</td>
<td>
WPs 1, 4 and 5
</td>
<td>
WPs 3-8
</td>
<td>
1.3, 2.3, WPs 4-5
, 6.1-6.3, WPs 7-
8
</td> </tr>
<tr>
<td>
ID7
</td>
<td>
Workshops
</td>
<td>
Workshop discussions
</td>
<td>
Responses of workshop
participants
</td>
<td>
M5-M36
</td>
<td>
Text file
</td>
<td>
Data on views of relevant experts and stakeholders are collected during three
interdisciplinary research workshops on theory and method (organized by
EIJKU), a data analysis workshop (organized by JH), a series of multi-
stakeholders deliberative workshops on future policy scenarios (organized by
UOT & UDC). Data are exploited in WPs 3, 4, 5, 6 and 7 (analyses and
synthesis), and further utilized in WP8 as part of the dissemination efforts.
</td>
<td>
WPs 1, 2, 4 and 5
</td>
<td>
WPs 2, 4 and 5
</td>
<td>
WPs 2, 4 and 5
</td>
<td>
WPs 2, 4 and 5
</td>
<td>
WPs 3-8
</td>
<td>
1.3, WP2, 3.2,
WPs 4-5, 6.1-6.3,
WPs 7-8
</td> </tr> </table>
# APPENDIX II: CONSENT FORMS
This appendix collects all consent forms and information sheets used in
SMARTEES. The first included document is the template for SMARTEES information
sheets based on the general template provided by the Norwegian Centre for
Research Data (NSD).
(1) Consent form template from NSD.
#### Request for participation in research project
_**[This template serves as an example of an information letter.**_
_**Clear the text and insert your own.**_
_**NB: all information should be concise and easy to understand]**_
##### "[Insert _project title_ ]"
**Background and Purpose**
[ **Describe** the purpose of the project and briefly sketch the main research
topics. Indicate whether the project is a master’s or Ph.D. project at the
institution, whether it is implemented as commissioned research, in
cooperation with other institutions, etc.]
[ **Describe** how the sample has been selected and/or why the person has been
requested to participate.]
**What does participation in the project imply?**
[ **Describe** the main features of the project: data collection that requires
active participation (surveys, interviews, observation, tests, etc.,
preferably describing approximate duration), and any collection of data about
the participant from other sources (registers, records, student files, other
informants, etc.). **Describe** the types of data to be collected (e.g.
"questions will concern..."), and the manner(s) in which data will be
collected (notes, audio/video recordings, etc.).
If parents are to give consent on behalf of their children, inform that they
can request to see the questionnaire/interview guide etc. If multiple sample
groups are to be included in the project, it must be explicitly indicated what
participation entails for each group, alternatively, a separate information
letter must be made for each group.]
**What will happen to the information about you?**
All personal data will be treated confidentially. [ **Describe** who will have
access to personal data (e.g. only the project group, student and supervisor,
data processor, etc.), and how personal data/recordings will be stored to
ensure confidentiality (e.g. if a list of names is stored separately from
other data).]
[ **Describe** whether participants will be recognizable in the publication or
not.]
The project is scheduled for completion by [ **insert** date]. [ **Describe**
what will happen to personal data and any recordings at that point. If the
data will not be made anonymous by project completion: state the purpose of
further storage/use, where data will be stored, who will have access, as well
as the final date for anonymization (or information about personal data being
stored indefinitely).]
**Voluntary participation**
It is voluntary to participate in the project, and you can at any time choose
to withdraw your consent without stating any reason. If you decide to
withdraw, all your personal data will be made anonymous. [For patients and
others in dependent relationships, it must be stated that it will not affect
their relationships with clinicians or others, if they do not want to
participate in the project, or if they at a later point decide to withdraw.]
If you would like to participate or if you have any questions concerning the
project, please contact [ **Insert** name and telephone number of the project
leader. In student projects contact information of the supervisor should also
be inserted].
The study has been notified to the Data Protection Official for Research, NSD
- Norwegian Centre for Research Data.
#### Consent for participation in the study
**[Consent may be attained in writing or verbally. If consent is obtained in
writing from the participant, you can use the formulation below. If
parents/guardians are to give consent on behalf of their children or others
with reduced capacity to give consent, the consent form must be adapted, and
the participant’s name should be stated.]**
I have received information about the project and am willing to participate
\-------------------------------------------------------------------------------------------------------------
(Signed by participant, date)
**[Checkboxes can be used (in addition to signature) if the project is
designed in such a way that the participant can choose to give consent to some
parts of the project without participating in all parts (e.g. questionnaire,
but not interview), or if information is to be obtained from other sources,
especially when the duty of confidentiality must be set aside in order for the
information about the participant to be disclosed. _Examples: - I agree to
participate in the interview / - I agree that information about me may be
obtained from teacher/doctor/register / - I agree that my personal information
may be published/saved after project completion_ ]**
# APPENDIX III: CONFIRMATIONS BY DATA PROTECTION OFFICERS
All confirmations regarding the conduct of data collection in accordance with
national and international law, especially the General Data Protection
Regulation (Regulation (EU) 2016/679), will be collected in this appendix as
soon as they are available.
# APPENDIX IV: DATA DOCUMENTATION TEMPLATES
The following two templates shall be used to document the necessary background
of the data files for internal and external use in SMARTEES.
1. Data documentation template for qualitative data in SMARTEES
2. Data documentation template for quantitative data in SMARTEES
_Data documentation template for qualitative data in SMARTEES_
Name of the data set: ___________ Date the data set was finalized:
_________________
Date/time period the data was collected: ____________ to ____________.
Responsible partner for the collection of the data:
_________________________________ (name) ____________________________
(institution)
Data produced in WP: ____________ Task: ____________
Data pseudonymized on (date): _______________ by __________________________
Data anonymized on (date): _______________ by __________________________
_Information about the participants:_
Number: __________ Age: _________________ Sex: ___________________
Participants’ background: ____________________________________________________
Recruitment procedure: _____________________________________________________
Original language of the material:
_____________________________________________
Data collected by (interviewer):
_______________________________________________
Transcribed by: ___________________________________________________________
Transcription rules: ________________________________________________________
Translated to English by: ___________________________________________________
Ethically cleared by: _____________________________________ on (date):
_______________
Interview guidelines (or the like):
_______________________________________________________
Size of the data (e.g. number of words):
_________________________________________________
Short summary: ______________________________________________
_Data documentation template for quantitative data in SMARTEES_
Name of the data set: ___________ Date the data set was finalized:
_________________
Date/time period the data was collected: ____________ to ____________.
Responsible partner for the collection of the data:
_________________________________ (name) ____________________________
(institution)
Data produced in WP: ____________ Task: ____________
Data pseudonymized on (date): _______________ by __________________________
Data anonymized on (date): _______________ by __________________________
_Information about the participants:_
Number: __________ Age: _________________ Sex: ___________________
Participants’ representative for which population:
__________________________________________
Recruitment procedure: _____________________________________________________
Response rate: ___________________________________________________________
Original language of the material:
_____________________________________________
Translated to English by: ___________________________________________________
Ethically cleared by: _____________________________________ on (date):
_______________
Variables in the dataset:
| **Variable name** | **Variable type** | **Variable label** | **Answering format / value labels** | **Comments** |
| --- | --- | --- | --- | --- |
|  |  |  |  |  |
|  |  |  |  |  |
_Variable types:_
* T = text
* D = date / time
* B = binary / dichotomous
* C = categorical
* O = ordered categorical / ordinal
* I = interval / ratio / Likert scales with 5 or more categories
Short summary: ______________________________________________
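To reduce transcription errors when the templates are filled in by different partners, the quantitative template can also be captured in machine-readable form. The following is a minimal illustrative sketch in Python, not part of the SMARTEES tooling; all field names, values and file names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

# Variable type codes as defined in the quantitative template above.
VARIABLE_TYPES = {
    "T": "text",
    "D": "date / time",
    "B": "binary / dichotomous",
    "C": "categorical",
    "O": "ordered categorical / ordinal",
    "I": "interval / ratio / Likert scales with 5 or more categories",
}

@dataclass
class Variable:
    name: str          # Variable name
    var_type: str      # One of the codes in VARIABLE_TYPES
    label: str         # Variable label
    value_labels: str  # Answering format / value labels
    comments: str = ""

    def __post_init__(self):
        # Reject unknown type codes early, when the record is created.
        if self.var_type not in VARIABLE_TYPES:
            raise ValueError(f"Unknown variable type code: {self.var_type!r}")

@dataclass
class QuantitativeDataDocumentation:
    dataset_name: str
    finalized: str            # Date the data set was finalized
    collected_from: str
    collected_to: str
    responsible_partner: str
    work_package: str
    task: str
    pseudonymized_on: str = ""
    anonymized_on: str = ""
    n_participants: int = 0
    response_rate: str = ""
    original_language: str = ""
    variables: List[Variable] = field(default_factory=list)

# Example record (all values hypothetical):
doc = QuantitativeDataDocumentation(
    dataset_name="survey_wave1", finalized="2019-06-30",
    collected_from="2019-01-15", collected_to="2019-03-31",
    responsible_partner="Example University", work_package="WP4", task="T4.2",
    variables=[Variable("age", "I", "Age of respondent", "years")],
)
```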
1297_IMAGINE_764066.md (Horizon 2020, https://phaidra.univie.ac.at/o:1140797)

# 1 Introduction
## 1.1 Purpose
IMAGINE is a Horizon 2020 project with the main objective to develop a new
Electro-Mechanical Generator (EMG) for wave energy applications. The EMG will
have direct commercial applications and is based on proprietary technology
owned by the project lead partner. This Data Management Plan (DMP) is an
update to the first DMP, which was published in M6 (August 2018), and reviews the data that is and will be created, documented, stored and shared in the IMAGINE project.
The first DMP detailed the generated data that would be exploited according to the Open Research Data (ORD) guidance, adopting the Findable, Accessible, Interoperable and Reusable (FAIR) approach whilst protecting results with commercial sensitivities related to the development of a commercial technology.
The main purpose of this intermediate version of the DMP is to update the list
of potential Open Access information and documents, as determined in the first
DMP, and indicate any changes within the consortium.
The previous and first version of the DMP was published in M6 (August 2018) of the IMAGINE project; the final version, with a final review, will be presented in M30 (August 2020) of the project.
## 1.2 Scope
The main scope of the first version of the DMP (D8.3) was to provide an overview of the results generated within the IMAGINE project and to indicate the potential open access documents following the ORD guidance, without going into the specific communication and dissemination strategy, as this is
discussed in the Plan for the Exploitation and Dissemination of Results (PEDR
D8.4). The DMPs also do not report on the Intellectual Property Rights (IPR)
as this is part of the Project Management Plan (D1.2).
The scope of this intermediate version of the DMP (D8.6) is to build on the initial DMP and provide an update to the list of general and underlying datasets and summary level reports that were identified as potentially suitable for sharing and dissemination.
## 1.3 Definitions
This section provides definitions for terms used in this document.
**Project partners / consortium**: The organization constituted for the purpose of the IMAGINE project, comprising: UMBRAGROUP spa (UMBRA); VGA srl (VGA); Bureau Veritas Marine & Offshore (BV); K2Management Lda (K2M), formerly Cruz Atcheson Consulting Engineers (CA); Norges Teknisk-Naturvitenskapelige Universitet (NTNU); The University of Edinburgh (UEDIN).

**Dataset**: Digital information created in the course of research, but which is not a published research output. Research data excludes purely administrative records. The highest priority research data is that which underpins a research output. Research data do not include publications, articles, lectures or presentations.

**Data Type [1]**: R: Document, report (excluding the periodic and final reports); DEM: Demonstrator, pilot, prototype, plan designs; DEC: Websites, patents filing, press & media actions, videos, etc.; OTHER: Software, technical diagram, etc.

**Dissemination Level [1]**: PU: Public, fully open, e.g. web; CO: Confidential, restricted under conditions set out in the Grant Agreement; CI: Classified, information as referred to in Commission Decision 2001/844/EC.

**Metadata**: Information about datasets stored in a repository/database template, including size, source, author, production date etc.

**Repository**: A digital repository is a mechanism for managing and storing digital content.
## 1.4 Abbreviations and Acronyms
| Abbreviation | Meaning |
| --- | --- |
| DMP | Data Management Plan (this document) |
| EC | European Commission |
| EDM | Exploitation and Dissemination Manager |
| EMG | Electro-Mechanical Generator |
| FAIR | Findable, Accessible, Interoperable, and Reusable |
| FTP | File Transfer Protocol |
| HALT | Highly Accelerated Life Test |
| HWIL | Hardware-In-the-Loop |
| IPR | Intellectual Property Rights |
| PEDR | Plan for the Exploitation and Dissemination of Results |
| PMP | Project Management Plan |
| PTO | Power Take-Off |
| OA | Open Access |
| ORD | Open Research Data |
| WEC | Wave Energy Convertor |
| WP | Work Package |
## 1.5 Reference
N/A
<table>
<tr>
<th>
**2**
</th>
<th>
**Data Summary**
</th> </tr> </table>
## 2.1 Assessment of existing data
This project builds on a number of prior research topic areas and scientific
work, some of which are proprietary company IP and others that are open or
published literature. These sources are accessible by the project partners and
are summarised in Table 1 in relation to each of the project work packages
(WP).
| Background Data Description | Origin | Accessibility Level | Related WP |
| --- | --- | --- | --- |
| WEC-Sim - the open source software tool for modelling wave energy systems and related published case studies. | National Renewable Energy Laboratory (NREL) and Sandia National Laboratories (Sandia) | PU | WP2 - EMG Integrability Analysis and Specifications |
| Modified version of the WEC-Sim numerical code. | K2M | CO | WP2 |
| European wave energy resource data. | Published metocean datasets | PU | WP2 |
| EMG design data, prior test results and manufacturing processes. | UMBRA | CO | WP3 - EMG Prototype Design and Fabrication |
| Design and manufacturing standards. | Standards organisations | PU | WP3 |
| Test bench design data and manufacturing processes and associated standards. | VGA | CO | WP4 - HWIL Test Bench Design and Fabrication |
| Design and manufacturing standards. | Standards organisations | PU | WP4 |
| Control system strategies and methodologies. | NTNU, published research | PU | WP5 - Control System Design and Implementation |
| Techno-economic model. | UEDIN | CO | WP7 - Techno-economic assessment |
| Model input data (e.g. cost data). | Published research | PU | WP7 |

**Table 1: Existing datasets (PU: Public, CO: Confidential)**

For completeness, the remaining project Work Packages are:
* WP1 - Project Management
* WP6 - EMG experimental testing
* WP8 - Project results exploitation and dissemination
## 2.2 Information on new datasets/outputs
In order to advance and demonstrate the development of the EMG and deliver the project objectives, the IMAGINE project will produce a range of new outputs. The results of this project are planned to be exploited commercially, and as such the dissemination level for all but one of the project Deliverables is specified as Confidential. Nevertheless, in the first DMP a preliminary list of general and underlying datasets and summary level reports was identified as potentially suitable for sharing and dissemination.

Table 2 is the updated list of 'Open Access Content', presented in relation to the project Deliverables. The data will comprise quantitative and qualitative data in a variety of easily accessible formats, including Excel (XLSX, CSV), Word (DOCX), Power Point (PPTX), image (JPEG, PNG, GIF, TIFF, MPEG) and Portable Document Format (PDF). This table includes the data utility, the responsible partner, the medium for dissemination (digital reports and datasets; conference papers and scientific papers), as well as the documents (close to being) published at the time of the intermediate DMP. The columns 'responsible partner' and 'documents' are introduced in this intermediate DMP.
**Open Access Content List - Intermediate Issue**

| ID | Linked WP | Linked Deliv. | Description | Utility / Users | Responsible Partners | Online Report | Online Datasets | Conf. paper | Journal paper | Documents | Format, Volume |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 2 | 2.3 | Scientific paper summarising preliminary load assessment for the EMG. | Relevant to wave energy project developers. | K2 | N | N | Y | TBD | EWTEC2019 paper | PDF <100MB |
| 2 | 3 | 3.1 | Summary design and arrangements | Relevant to wave energy project developers. | UEDIN/UMBRA | Y | N | Y | TBD | EWTEC2019 paper | PDF <100MB |
| 3 | 4 | 4.1, 4.2 | Summary of HWIL hardware and software system descriptions. | Provides test bench capability information to other test rig operators and users - good for publicity. | VGA | N | N | N | TBD |  | DOCX <100MB |
| 4 | 5 | 5.1, 5.2 | Summary report of control strategies investigated. | Control system, PTO and WEC developers. | NTNU | Y | Y | Y | TBD |  | DOCX <100MB |
| 5 | 6 | 6.2 | Summary of HWIL and HALT test outcomes including generator performance curves. | WEC developers and other potential users. | UMBRA | Y | Y | Y | TBD | EWTEC2019 paper | DOCX, XLSX <100MB |
| 6 | 6 | 6.2 | Pictures and videos of physical testing. | Potential users, as well as to inform wider stakeholders and society. | UMBRA | N | Y | N | N |  | JPG, MPEG <10GB |
| 7 | 7 | 7.1, 7.2, 7.3, 7.4 | Summary report of techno-economic assessment (LCOE, LCA, SCOE). | Researchers and modellers, WEC developers, users. | UEDIN | Y | N | Y | TBD |  | DOCX, XLSX <100MB |
| 8 | 8 | 8.2 | IMAGINE Website | All stakeholders | UMBRA | - | - | - | - | News; Electronic newsletter 1 |  |
| 9 | 8 | 8.4, 8.7 | Summary of workshops report | All stakeholders | UMBRA | Y | N | N | N |  | DOCX <100MB |
| 10 | 8 | 8.3, 8.6 | IMAGINE Data Management Plan | All stakeholders - maintain transparency in line with H2020 expectations. | UEDIN/UMBRA | Y | N | N | N |  | DOCX <100MB |

**Table 2: List of Open Access data - description, utility and type – in relation to project WPs and Deliverables (TBD: to be decided) – intermediate version**
Table 3 presents the list of the public documents generated in the IMAGINE
project at the time of the intermediate DMP (M18).
| Type | Document |
| --- | --- |
| Conference Presentation | ICOE presentation 2018 – Innovation Policy Pathways for the Commercialisation of Marine Energy |
| Conference Paper | EWTEC paper 2019 - Progress update on the development and testing of an advanced power take-off for marine energy applications (lead author UEDIN) |
| Conference Paper | EWTEC paper 2019 - Preliminary Load Assessment: UMBRA’s 250kW EMG Power Take-Off (lead author K2M) |
| Conference Paper | EWTEC paper 2019 - Numerical and experimental test on a large scale model of a pivoting wave energy conversion system (lead author UMBRA) |
| Electronic newsletter | IMAGINE electronic newsletter, December 2019 – available on IMAGINE website |

**Table 3: List of open access data documents – intermediate version**
## 2.3 Stakeholder Responsibility

In the first DMP (D8.3), the responsibilities for the application of the DMP were indicated; an overview is shown in Table 4. The last column of Table 4 indicates the actions of these stakeholders between the first version and the intermediate version of the DMP.
| Stakeholder | Responsibility | Intermediate DMP action |
| --- | --- | --- |
| EDM – Mr Henry Jeffrey | Responsible for the overall application of the strategy of the DMP (and PEDR) and the transfer of the benefits from the IMAGINE results to all project partners and wider civic society. | UEDIN to update the DMP. |
| Project Coordinator – Mr Luca Castellini | Make sure that the project partners are aware of the DMP and their responsibilities regarding the data and results from the IMAGINE project. | Ensure all project partners have updated their data management and security and their activities in the ‘open access data’ content list. |
| All project partners | Prepare quality-controlled data in correct format and sharing through agreed publication channel. The share-ability of the data/results should be confirmed with the Project Coordinator. | Update data management and security policies and measures, where required, and indicate the activities in the ‘open access data’ content list. |

**Table 4: Responsibilities of stakeholders involved in open access, including the actions undertaken at M18 (intermediate version of DMP)**
# 3 FAIR data
The first DMP (D8.3) indicated IMAGINE's application of the Findable, Accessible, Interoperable, Reusable (FAIR) approach for the project's results. The description is presented in Appendix B. A high-level overview of the FAIR application within IMAGINE is presented in Table 5.
| FAIR approach | IMAGINE’s application | Intermediate DMP action |
| --- | --- | --- |
| Findable | All IMAGINE’s documents will be identifiable based on a common naming convention and will include appropriate metadata. |  |
| Accessible | Project partners need to protect results that are commercially and industrially exploitable, in accordance with Article 27 in the Grant Agreement. Confidential data will be hosted on the IMAGINE Consortium File Transfer Protocol (FTP) area and individual partners’ institutional online repositories. Public datasets and documents will be published, where appropriate, on the IMAGINE website, research data repository, journals and the Wave and Tidal Knowledge Network. | Identification of preferred research data repository |
| Interoperable | A draft metadata set is set up containing: General information; Sharing/Access data; Dataset/Output Overview; Methodical information. | Confirm metadata set |
| Reusable | The data will be reusable through the open access as determined under the “Accessible” section. |  |

**Table 5: High-level overview of the FAIR approach applied in the IMAGINE project, indicating the actions undertaken at M18 (intermediate version of DMP)**
# 4 Allocation of resources
The activities related to making the data/outputs open access are anticipated to be covered within the allocated budget for each work package. The costs of making scientific publications, hosting a project website, and using the partners' and open access data repositories are contained within these budgets as eligible costs.
# 5 Data security and data management
Each IMAGINE project partner will be obliged to follow the approach set out in this DMP as well as the requirements stipulated in the Grant Agreement. The project partners will also comply with their own organisational procedures for managing research data, as listed in Table 6. The main change in this chapter is the renaming of a project partner following the acquisition of Cruz Atcheson Consulting Engineers (CA) by K2 Management A/S.

Part of data management is the selection of the data repository used to maintain research results, as appropriate given the Intellectual Property Rights. Two repositories used in other marine energy H2020 projects have been identified: the Zenodo repository, used by the OPERA project, and the EC CORDIS website, used by all EC framework projects. Some characteristics of these data repositories are as follows (a minimal deposit sketch follows the list):

* In the Zenodo repository, data will be preserved indefinitely (minimum of 5 years) and there are currently no costs for archiving data in this repository [2]. At the time of publishing this intermediate DMP, the Zenodo repository is the preferred data repository for research data generated within the IMAGINE project that has been/will be identified as suitable for exploitation.
* CORDIS (Community Research and Development Information Service) contains the project results of all the EU’s framework programmes, as part of the EC’s strategy to disseminate and exploit research results [3]. Therefore, CORDIS already contains the public IMAGINE project results.
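For illustration only, the listing below sketches how a public IMAGINE dataset could be deposited programmatically, assuming Zenodo's REST deposition API as publicly documented at developers.zenodo.org; the access token, file name and metadata values are placeholders, not actual project artefacts.

```python
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "REPLACE_WITH_PERSONAL_ACCESS_TOKEN"  # placeholder

# 1. Create an empty deposition (a draft record on Zenodo).
r = requests.post(ZENODO_API, params={"access_token": TOKEN}, json={})
r.raise_for_status()
deposition = r.json()

# 2. Upload a data file to the deposition's file bucket.
bucket_url = deposition["links"]["bucket"]
with open("D6.2_summary_dataset.csv", "rb") as fp:  # hypothetical file
    requests.put(f"{bucket_url}/D6.2_summary_dataset.csv",
                 data=fp, params={"access_token": TOKEN}).raise_for_status()

# 3. Attach minimal metadata (illustrative values only).
metadata = {"metadata": {
    "title": "IMAGINE D6.2 summary test dataset",
    "upload_type": "dataset",
    "description": "Summary of HWIL and HALT test outcomes.",
    "creators": [{"name": "IMAGINE Consortium"}],
}}
requests.put(f"{ZENODO_API}/{deposition['id']}",
             params={"access_token": TOKEN}, json=metadata).raise_for_status()

# 4. Publishing (POST .../actions/publish) is deliberately omitted here,
#    since publication should only follow the consortium approval process.
```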
| Project partner | Data management policies |
| --- | --- |
| BV | Compliance with Bureau Veritas Global IS/IT Charter detailing the rules governing the use of the Bureau Veritas Information System. |
| K2M | Internal company procedures |
| NTNU | Policy for research data: https://innsida.ntnu.no/c/wiki/get_page_attachment?p_l_id=22780&nodeId=24646&title=NTNU+Open+Data&fileName=NTNU%20Open%20Data_Policy.pdf ; Policy for information security: https://innsida.ntnu.no/wiki//wiki/English/Policy+for+information+security ; Data protection: NTNU has guidelines that adhere to the GDPR (currently available in Norwegian only) |
| UMBRA | Internal company procedures |
| UEDIN | Research Data Service: https://www.ed.ac.uk/information-services/research-support/research-data-service ; Data protection: https://www.ed.ac.uk/records-management/policy/data-protection ; Information Security: https://www.ed.ac.uk/infosec |
| VGA | Internal company procedures |

**Table 6: IMAGINE project partner data management policies – intermediate version**
The datasets will be preserved in line with the European Commission Data
Deposit Policy and in line with the policy of the selected repository.
Table 7 gives an overview of the data security measures of the data
repositories in use by the IMAGINE project partners.
| Project partner / Repository | Data security measures |
| --- | --- |
| Project specific FTP | The IMAGINE Consortium has set up a File Transfer Protocol (FTP) area where data can be shared across partners. This area provides a space for information exchange and an archive for all the documentation produced along the Project lifespan. This is maintained by VGA, which has also provided access details to specific individuals. |
| Data Repository | If research data is deemed appropriate by the Project lead to be made public, Zenodo is the preferred data repository at the time of publishing the intermediate DMP, as this repository has been used by another H2020 project with similar data protection requirements. |
| BV | Data will be stored and managed on Bureau Veritas authorized software, hardware and network facilities in order to ensure secure file management according to Bureau Veritas Global IS/IT Charter. |
| K2M | Data will be stored locally on K2Management Lda’s computing facilities and NAS Drive (and its associated back-up cloud). |
| NTNU | Data will be hosted locally by NTNU’s IT Services using internal shared drives and NTNU’s SharePoint system, which allows free storage, security, daily backup, disaster recovery and secure data sharing among research teams. |
| UMBRA | Data will be hosted locally by the company’s IT Services using Teamcenter and SharePoint systems, which allow free storage, security, daily backup, disaster recovery and secure data sharing among research teams. Data and information stored on the IMAGINE website are hosted in a third-party repository contracted by UMBRAGROUP spa. |
| UEDIN | Data will be hosted locally by the University’s Information Services using the University’s SharePoint system, which allows free storage, security, daily backup, disaster recovery and secure data sharing among research teams. |
| VGA | Data will be hosted locally by the company’s IT Services using IKNOW systems, which allow storage, security, daily backup and disaster recovery. |

**Table 7: Data security measures of the IMAGINE project partners’ data repositories – intermediate version**
Following completion of the project, all responsibilities concerning data recovery and secure storage will pass to the repository storing the dataset. This provides options for making some data openly available while keeping other data under restricted access, as required.
# 6 Ethical aspects
The IMAGINE project is to comply with the ethical principles set out in Article 34 of the Grant Agreement. Data collected and produced as part of the project will be purely technical in nature and will be handled in accordance with the ethical principles, notably to avoid fabrication, falsification, plagiarism or other research misconduct.
Personal identifiable information will be collected and stored for the
purposes of communication with stakeholders and other interested parties. When
the project involves access by the partners to personal data, the partners
shall be regarded as responsible for treatment of said data and shall comply
with rules laid down in the Regulation (EU) No 2016/679 of the European
Parliament and of the Council of 27 April 2016 on the protection of
individuals with regard to the processing of personal data and on the free
movement of such data, and repealing EU Directive 95/46/EC (General Data
Protection Regulation), and its transposition to the national laws of the
member states involved, as well as any other applicable national regulations
currently in force or introduced in the future to modify and/or replace it.
# 7 Other issues
The periodic updates to the DMP are not expected to result in significant
changes but the following changes could occur:
* greater resolution and visibility of the specific data that will be produced by the research and that is appropriate to be made Open Access, whilst respecting commercial sensitivities.
* changes in consortium policies (e.g. new innovation potential, decision to file for a patent).
* changes in consortium composition and external factors (e.g. new consortium members joining or old members leaving).
The first DMP was released in M6, August 2018. This intermediate version is
released in M18, August 2019.
The final update to the DMP will be produced in M30, August 2020.
# 8 Bibliography
1. European Commission, “Horizon 2020 Research and Innovation Actions Administrative forms (Part A) Research proposal (Part B),” 21 May 2015. [Online]. Available: http://ec.europa.eu/research/participants/data/ref/h2020/call_ptef/pt/h2020-call-pt-ria-ia_en.pdf. [Accessed 18 June 2018].
2. Zenodo, “Zenodo repository homepage,” [Online]. Available: https://www.zenodo.org/. [Accessed 18 June 2018].
3. European Commission, “What's on CORDIS,” [Online]. Available: https://cordis.europa.eu/about/en. [Accessed July 2019].
# 9 Appendix A – IMAGINE Grant Agreement Extract
### ARTICLE 27 — PROTECTION OF RESULTS — VISIBILITY OF EU FUNDING

### 27.1 Obligation to protect the results
Each beneficiary must examine the possibility of protecting its results and
must adequately protect them — for an appropriate period and with appropriate
territorial coverage — if:
(a) the results can reasonably be expected to be commercially or industrially exploited, and

(b) protecting them is possible, reasonable and justified (given the circumstances).
When deciding on protection, the beneficiary must consider its own legitimate
interests and the legitimate interests (especially commercial) of the other
beneficiaries.
### 27.2 Agency ownership, to protect the results
If a beneficiary intends not to protect its results, to stop protecting them
or not seek an extension of protection, the Agency may — under certain
conditions (see Article 26.4) — assume ownership to ensure their (continued)
protection.
### 27.3 Information on EU funding
Applications for protection of results (including patent applications) filed
by or on behalf of a beneficiary must — unless the Agency requests or agrees
otherwise or unless it is impossible — include the following:
“The project leading to this application has received funding from the
European Union’s Horizon 2020 research and innovation programme under grant
agreement No 764066”.
### 27.4 Consequences of non-compliance
If a beneficiary breaches any of its obligations under this Article, the grant
may be reduced (see Article 43).
Such a breach may also lead to any of the other measures described in Chapter
6.
### ARTICLE 29 — DISSEMINATION OF RESULTS — OPEN ACCESS — VISIBILITY OF EU FUNDING

### 29.1 Obligation to disseminate results
Unless it goes against their legitimate interests, each beneficiary must — as soon as possible — ‘**disseminate**’ its results by disclosing them to the public by appropriate means (other than those resulting from protecting or exploiting the results), including in scientific publications (in any medium).
This does not change the obligation to protect results in Article 27, the
confidentiality obligations in Article 36, the security obligations in Article
37 or the obligations to protect personal data in Article 39, all of which
still apply.
A beneficiary that intends to disseminate its results must give advance notice
to the other beneficiaries of — unless agreed otherwise — at least 45 days,
together with sufficient information on the results it will disseminate.
Any other beneficiary may object within — unless agreed otherwise — 30 days of
receiving notification, if it can show that its legitimate interests in
relation to the results or background would be significantly harmed. In such
cases, the dissemination may not take place unless appropriate steps are taken
to safeguard these legitimate interests.
If a beneficiary intends not to protect its results, it may — under certain
conditions (see Article 26.4.1) — need to formally notify the Agency before
dissemination takes place.
### 29.2 Open access to scientific publications
Each beneficiary must ensure open access (free of charge online access for any user) to all peer-reviewed scientific publications relating to its results.
In particular, it must:
(a) as soon as possible and at the latest on publication, deposit a machine-readable electronic copy of the published version or final peer-reviewed manuscript accepted for publication in a repository for scientific publications; moreover, the beneficiary must aim to deposit at the same time the research data needed to validate the results presented in the deposited scientific publications;

(b) ensure open access to the deposited publication — via the repository — at the latest:

(i) on publication, if an electronic version is available for free via the publisher, or

(ii) within six months of publication (twelve months for publications in the social sciences and humanities) in any other case;

(c) ensure open access — via the repository — to the bibliographic metadata that identify the deposited publication.

The bibliographic metadata must be in a standard format and must include all of the following:

* the terms “European Union (EU)” and “Horizon 2020”;
* the name of the action, acronym and grant number;
* the publication date, and length of embargo period if applicable; and
* a persistent identifier.
### 29.3 Open access to research data
Regarding the digital research data generated in the action (‘**data**’), the beneficiaries must:

(a) deposit in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge for any user — the following:

(i) the data, including associated metadata, needed to validate the results presented in scientific publications, as soon as possible;

(ii) other data, including associated metadata, as specified and within the deadlines laid down in the 'data management plan' (see Annex 1);

(b) provide information — via the repository — about tools and instruments at the disposal of the beneficiaries and necessary for validating the results (and — where possible — provide the tools and instruments themselves).
This does not change the obligation to protect results in Article 27, the
confidentiality obligations in Article 36, the security obligations in Article
37 or the obligations to protect personal data in Article 39, all of which
still apply.
As an exception, the beneficiaries do not have to ensure open access to
specific parts of their research data if the achievement of the action's main
objective, as described in Annex 1, would be jeopardised by making those
specific parts of the research data openly accessible. In this case, the data
management plan must contain the reasons for not giving access.
### 29.4 Information on EU funding — Obligation and right to use the EU emblem

Unless the Agency requests or agrees otherwise or unless it is impossible, any dissemination of results (in any form, including electronic) must:

(a) display the EU emblem, and

(b) include the following text:
“This project has received funding from the European Union’s Horizon 2020
research and innovation programme under grant agreement No 764066”.
When displayed together with another logo, the EU emblem must have appropriate
prominence. For the purposes of their obligations under this Article, the
beneficiaries may use the EU emblem without first obtaining approval from the
Agency.
This does not however give them the right to exclusive use.
Moreover, they may not appropriate the EU emblem or any similar trademark or
logo, either by registration or by any other means.
### 29.5 Disclaimer excluding Agency responsibility
Any dissemination of results must indicate that it reflects only the author's
view and that the Agency is not responsible for any use that may be made of
the information it contains.
### 29.6 Consequences of non-compliance
If a beneficiary breaches any of its obligations under this Article, the grant
may be reduced (see Article 43).
Such a breach may also lead to any of the other measures described in Chapter
6.
# Appendix B – FAIR data
This Appendix includes the information as described in the first DMP on
publishing according to the FAIR approach.
## Making data findable, including provisions for metadata
The project lead partner, UMBRAGROUP spa, will be responsible for
disseminating this DMP to all project partners. Each project partner will be
responsible for identifying data outputs, of the type described in Table 2, to
be recorded in a central project data register held on the FTP site managed by
UMBRAGROUP spa.
Each project partner will manage and curate their data, including the
production of metadata, and assurance of data quality.
All documents and data sets will follow a naming convention (a short sketch for composing such names follows the example below):

1. A unique identification number linking the dataset with a deliverable.
2. The project title.
3. The data descriptor title.
4. Optional - a version name or number and a month number indicator for items that are updated during the course of the project.

Example: D8.3 IMAGINE Data Management Plan – First Issue_M6
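As an illustration of the convention, the following minimal Python sketch composes and loosely checks such names; the function and pattern are illustrative assumptions, not project tooling, and the exact separator characters remain a project choice.

```python
import re
from typing import Optional

def dataset_name(deliverable_id: str, title: str,
                 version: str = "", month: Optional[int] = None) -> str:
    """Compose a document/dataset name following the convention above:
    unique ID, project title, descriptor title, optional version and month."""
    name = f"{deliverable_id} IMAGINE {title}"
    if version:
        name += f" – {version}"
    if month is not None:
        name += f"_M{month}"
    return name

# Reproduces the example given in the text.
assert (dataset_name("D8.3", "Data Management Plan", "First Issue", 6)
        == "D8.3 IMAGINE Data Management Plan – First Issue_M6")

# A loose validity check for names following the same pattern.
NAME_RE = re.compile(r"^D\d+\.\d+ IMAGINE .+?( – .+?_M\d+)?$")
assert NAME_RE.match("D8.3 IMAGINE Data Management Plan – First Issue_M6")
```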
All documents and data sets will be marked with document/version control.
All documents and data created in the project will be identifiable with appropriate metadata, including information on the type of data contained, the means by which the data was created, details of the creator, and terms of access; see details in Section 10.3.
## Making data openly accessible
In accordance with Article 27 of the grant agreement (Appendix A), the project
partners are obliged to protect the results where these can be expected to be
commercially or industrially exploited. The outputs of this project are
expected to be commercially exploited using IP owned and generated by the
partners and as a result this limits the level of openness. Exploitation may
involve a number of activities that have to be protected: using them in
further research activities; developing, creating or marketing a product or
process; creating and providing a service; or using them in standardization
activities.
The Public datasets and reports identified in Table 2 will be published in
full as appropriate in the following locations:
* The IMAGINE project website _www.h2020-imagine.eu_ - either in full or signposting another location.
* A research data repository, compliant with OpenAIRE guidance [3]. A repository selection process will be completed by the intermediate review of the DMP due in M18. The default repository, in the event that a preferred repository is not identified, will be Zenodo [4] that provides discoverable and secure hosting with management and monitoring provisions. The repository will host scientific publications and all datasets associated with such publications.
* Concerning scientific publications, prestigious peer-reviewed journals relevant to the sector will be targeted. When the research data related to these publications is needed to be published to validate the results presented, this will be either self-archived ‘green’ OA or immediately published ‘gold’ OA as appropriate [5]. The specific journal(s) will depend on the results obtained, but potential options are given below.
* Ocean Engineering, Elsevier (ISSN 0029-8018)
* Renewable Energy, Elsevier (ISSN 0960-1481)
* IEEE Transactions on Sustainable Energies (ISSN 1949-3029)
* Energies, MDPI (ISSN 1996-1073)
* The marine energy industry focussed Wave and Tidal Knowledge Network. [6]

The Confidential project data sets and reports will be hosted on:
* The IMAGINE Consortium File Transfer Protocol (FTP) area where data share across partners is possible. This area provides a space for information exchange and an archive for all the documentation produced along the Project lifespan.
* Individual partner’s institutional online repositories will host and preserve data until the end of the project.
## Making data interoperable
The data will be collected and shared in a standardised way using a standard
format for that data type. As required reference will be made to any software
required to run it but given the scope of this project it is not anticipated
that non-standard or uncommon software will be used. Barriers to access
through interoperability issues are not anticipated.
The metadata format will follow the convention of the hosting research data repository, see Section 10.2. A draft metadata format is set out below, followed by an illustrative machine-readable sketch, and this is subject to review in the next DMP update.
General Information
* Title of the dataset/output
* Dataset Identifier (using the naming convention outlined in Section 10.1.)
* Responsible Partner
* Work Package
* Author Information
* Date of data collection/production
* Geographic location of data collection/ production
* The title of project and funding sources that supported the collection of the data, i.e. European Union’s Horizon 2020 research and innovation programme under grant agreement No 764066.

Sharing/Access Information
* Licenses/access restrictions placed on the data
* Link to data repository
* Links to other publicly accessible locations of the data, see list in Section 10.2.
* Links to publications that cite or use the data
* Was data derived from another source?
Dataset/Output Overview
* What is the status of the documented data? – “complete”, “in progress”, or “planned”
* Date of production
* Date of submission/publication
* Are there plans to update the data?
* Keywords that describe the content
* Version number
* Format - Portable Document Format (PDF), Excel (XLSX, CSV), Word (DOC), Power Point (PPT), image (JPEG, PNG, GIF, TIFF).
* Size - MBs
Methodological Information
* Used materials
* Description of methods used for experimental design and data collection
* Methods for processing the data
* Instruments and software used in data collection and processing; specific information needed to interpret the data
* Standards and calibration information, if appropriate
* Environmental/experimental conditions
* Describe any quality-assurance procedures performed on the data
* Dataset benefits/utility
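Purely as an illustration of how this draft format might be captured consistently across partners, the sketch below serialises one metadata record to JSON. All field values and the output file name are hypothetical, and the final format will follow the hosting repository's own convention.

```python
import json

# One illustrative metadata record following the draft format above.
record = {
    "general": {
        "title": "Summary report of control strategies investigated",
        "identifier": "D5.1 IMAGINE Control Strategies Summary – M18",
        "responsible_partner": "NTNU",
        "work_package": "WP5",
        "authors": ["(author names)"],
        "date_of_collection": "2019-08",
        "location": "Trondheim, Norway",
        "funding": "European Union's Horizon 2020 research and innovation "
                   "programme under grant agreement No 764066",
    },
    "sharing_access": {
        "license": "CC-BY-4.0",
        "repository_link": "(link to data repository)",
        "derived_from_other_source": False,
    },
    "overview": {
        "status": "in progress",  # "complete", "in progress" or "planned"
        "version": "1.0",
        "keywords": ["wave energy", "PTO", "control"],
        "format": "PDF",
        "size_mb": 12,
    },
    "methodological": {
        "methods": "(description of methods used)",
        "quality_assurance": "(QA procedures performed on the data)",
    },
}

# Write the record alongside the dataset it describes.
with open("D5.1_metadata.json", "w", encoding="utf-8") as fp:
    json.dump(record, fp, indent=2)
```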
## Increase data re-use (through clarifying licences)
The datasets and outputs will be made available for re-use through publication
on the open access platforms and repositories listed in Section 10.2.
Quality control of the data is the responsibility of the relevant responsible
partner generating the data.
1305_DyViTo_765121.md (Horizon 2020, https://phaidra.univie.ac.at/o:1140797)
**Initial Data Management Plan**
_A Data Management Plan created using DMPonline_
Creators: Marina Bloj, Sarah Jones, [email protected]
Affiliation: University of Bradford
Template: European Commission (Horizon 2020)
ORCID iD: 0000-0001-9251-0750
Grant number: 765121

**Project abstract:**
Real world tasks as diverse as drinking tea or operating machines require us
to integrate information across time and the senses rapidly and flexibly.
Understanding how the human brain performs dynamic, multisensory integration
is a key challenge with important applications in creating digital and virtual
environments.
Communication, entertainment and commerce are increasingly reliant on ever
more realistic and immersive virtual worlds that we can modify and manipulate.
Here we bring together multiple perspectives (psychology, computer science,
physics, cognitive science and neuroscience) to address the central challenge
of the perception of material and appearance in dynamic environments. Our goal
is to produce a step change in the industrial challenge of creating virtual
objects that look, feel, move and change like ‘the real thing’. We will
accomplish this through an integrated training programme that will produce a
cohort of young researchers who are able to fluidly translate between the
fundamental neuro-cognitive mechanisms of object and material perception and
diverse applications in virtual reality. The training environment will provide
11 ESRs with cutting-edge, multidisciplinary projects, under the supervision
of experts in visual and haptic perception, neuroimaging, modelling, material
rendering and lighting design. This will provide perceptually-driven advances
in graphical rendering and lighting technology for dynamic interaction with
complex materials.
Central to the fulfilment of the network is the involvement of secondments to
industrial and public outreach partners.
Thus, we aim to produce a new generation of researchers who advance our
understanding of the ‘look and feel’ of real and virtual objects in a
seamlessly multidisciplinary way. Their experience of translating back and
forth between sectors and countries will provide Europe with key innovators in
the developing field of visual-haptic technologies.
Last modified: 08-08-2018
# DATA SUMMARY
Provide a summary of the data addressing the following issues:
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data generated/collected
* Specify if existing data is being re-used (if any)
* Specify the origin of the data
* State the expected size of the data (if known)
* Outline the data utility: to whom will it be useful
## Purpose
All data will be collected for the purpose of addressing a series of defined
research questions. The over-arching aim of the DyViTo network is to
investigate the creation and perception of complex materials- across senses-
in a dynamically changing environment. In order to do this, we will collect
three major types of data.
Firstly, we will collect data from human participants undertaking a range of
behavioural tests which are designed to measure both visual and haptic
sensitivity to materials. We will use these data to determine the properties
of materials which influence our perception of them. Additionally, we will
collect behavioural data on how our perception of materials is affected by
changes over time (e.g. changes in illumination or rotation angle).
Secondly, we will use fMRI, MEG and EEG techniques to collect neuroimaging
data from human participants while they are engaged in perception of
materials. These data will provide us with understanding of the neural
circuitry which underlies human perception of material properties. Linking
these data with the behavioural data outlined above will provide insight into
the link between human brain function and perception.
Thirdly, in order to collect the data outlined above, we will create novel
stimuli. These new stimuli will take two major forms: computer renderings of
object photographs and computer-generated, virtual materials. We will also
collect data on the physical characteristics of these stimuli which will
include optical measurements (e.g. light scatter) and data on the geometry of
a stimulus layout.
## Relation to Objectives
The DyViTo network is focussed upon three major research objectives:
**RO1.** To understand how humans perceive dynamic changes in shape, material
properties and illumination using novel behavioural and brain imaging
techniques.
The behavioural data that we collect from human participants performing
tests which measure visual and haptic sensitivity to materials will enable us
to understand material perception. We will collect data that quantifies
changes in performance on these tests which are associated with changes in
material properties (e.g. shape, illumination). These behavioural data will
enable us to understand human perception of dynamic changes in material
properties. To address the second part of this objective, we will employ three
brain imaging techniques: fMRI, MEG and EEG. These neuroimaging data will
enable us to link human perception of material appearance with the neural
circuitry, which underlies these processes.
**RO2.** To measure and model the sensory integration of vision and touch
information as we explore objects to determine their material properties.
We will collect behavioral and brain-imaging data associated with material
perception from two major domains: visual and haptic. Analysis of behavioural
and brain imaging data across these domains will enable us to understand how
the integration of information from both senses influences human perception of
materials. We will use these data to develop models which aim to explain
processing of information from the senses of vision and touch.
**RO3.** To exploit insights from perception science to optimise the
interactive rendering of virtual visual-haptic objects and to support advances
in lighting technology.
We will use the new knowledge generated by the behavioural and brain imaging
data to understand human perception of materials. Based on this understanding,
we will be able to make recommendations on how the parameters of virtual
objects can be manipulated to optimise their appearance and realism for the
human senses of vision and touch. Related to this, our results may be used to
inform the development of lighting technology used to illuminate materials.
## Types and Formats
1. Experimental Measurements
1. Behavioural data. These data comprise numerical measurements (e.g. accuracy, response time, rating scores on the appearance of materials). Data will be stored in ASCII format (i.e. as a .txt file) which is a widely-used method of storing quantitative behavioural data. To make the data as accessible as possible, this file type does not require proprietary software. Data will be organised with meaningful column headings and units.
2. MRI data. These data are large-scale numerical measurements consisting of time series of blood flow measurements obtained from multiple voxels within the participant's brain. Raw data for brain-imaging measurements are stored in standard formats (e.g., DICOM, Nifti) and then processed using specialised software packages. Intermediate processing stages involve the use of widely used tools (e.g. Excel) and specialist software (e.g. Matlab) for which there are open source tools to ensure that data files can be read without proprietary software. Data will be meaningfully organised with appropriate file headers and filenames, corresponding to date and place of acquisition, experimental condition and anonymised participant ID.
3. EEG data. These data are large-scale numerical measurements consisting of time series of electrical potential amplitudes, typically recorded with millisecond time resolution at multiple spatial locations simultaneously (e.g. 32-, 64-, 128- or 256-channel locations on an individual participant's scalp). The raw data are typically stored as numerical arrays in binary format (e.g. Matlab *.mat format) which are easily converted to other formats (see the conversion sketch after this list). The averaged or otherwise processed data are also readily made accessible in ASCII format with no loss of essential information. Data will be meaningfully organised with appropriate file headers and filenames, corresponding to date and place of acquisition, experimental condition and anonymised participant ID.
2. Audiovisual and haptic data
1. Audiovisual stimuli. These new real and virtual images and/or movies with or without associated sounds will be computer renderings and/or photographs of materials with lighting variations. They will be stored in an appropriate image/video file format.
2. Haptic stimuli. 3D models of printed objects of natural touch stimuli, and descriptions of force patterns produced by haptic devices/actuators.
3. Quantitative descriptions of the physical properties of the real world light, materials and object stimuli.
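As referenced in item 1c above, converting binary Matlab arrays to an open ASCII representation is straightforward. The sketch below assumes SciPy and NumPy are available; the file names and the variable name inside the .mat file ("eeg") are hypothetical.

```python
import numpy as np
from scipy.io import loadmat

# Load a processed EEG array from a Matlab .mat file
# (file and variable names are hypothetical).
mat = loadmat("sub-01_cond-A_eeg.mat")
eeg = mat["eeg"]  # e.g. channels x time samples

# Write to plain ASCII with a descriptive header so the file can be read
# without proprietary software, as described in the text above.
np.savetxt(
    "sub-01_cond-A_eeg.txt",
    eeg,
    header="EEG amplitudes (microvolts); rows=channels, columns=samples",
    comments="# ",
)
```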
## Existing Data Re-Use
DyViTo’s research questions and scientific approach are novel and aim to
generate new knowledge. Accordingly, the scope for re-use of existing data is
minimal. Nevertheless, we have identified a small number of areas in which re-
use of existing data is appropriate.
1. Existing behavioural data. To inform the development of new experimental protocols, existing behavioural data may be inspected, or subjected to computer simulations. These data were all collected by researchers within our network as part of previous studies. Accordingly, there are no issues surrounding copyright or ownership.
2. Publicly-available videos and images. We will use existing videos and images to analyse the optical properties of depicted materials (e.g. light scatter, reflectance). Media used for this purpose will be free from copyright restrictions and publicly available, for example through services such as Flickr and Wikimedia.
## Origin
1. Behavioural data. These data will be collected as part of controlled experiments performed in a research laboratory. The data are generated by volunteer participants who provide responses to either visual or haptic stimuli.
2. Neuroimaging data. These data will be collected as part of controlled experiments performed in a research laboratory. The data are collected via neuroimaging techniques (fMRI, MEG or EEG) from volunteer participants while they are engaged in a material perception task.
## Size
1. Behavioural data. The size of these text-based data will be small. We estimate that the total will not exceed 90MB (10 MB in each of the 9 institutions).
2. Neuroimaging data. The file size of the neuroimaging data is considerably larger. We estimate that the total will not exceed 500GB.
3. Audiovisual data. Video files of the haptic stimuli (500GB), video/images of visual stimuli (50GB).
## Utility
The behavioral data are of long-term value and will be useful to other
researchers investigating perception of materials. The broad scope of the
DyViTo network means that our data could be used across a range of disciplines
(e.g. psychology, neuroscience, computational modelling). Accordingly, all
behavioural data will be preserved and shared openly.
Similarly, the audiovisual data that we create (haptic stimuli videos and
visual stimuli images/videos) will also be useful to researchers in the fields
of visual and/or haptic sensory perception. We will preserve and make
available these audiovisual data.
We will also share data from our neuroimaging experiments, which will be useful to researchers investigating the neural mechanisms which underlie visual and haptic perception. To respect participant confidentiality, structural MRI scans are only shared once potentially identifiable information (e.g., scalp and facial features) has been removed.
# FAIR DATA
**2.1 Making data findable, including provisions for metadata:**
* Outline the discoverability of data (metadata provision)
* Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
* Outline naming conventions used
* Outline the approach towards search keyword
* Outline the approach for clear versioning
* Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how
We have identified the _UK Data Service_ as a potentially suitable repository and will follow their instructions regarding metadata. We are also considering alternative repositories, in particular Zenodo, Psychdata and the University of Cambridge's institutional repository. All of the repositories we are considering include provision for the creation of digital object identifiers, and we will comply with these processes.
In order to make our data discoverable, we will use keywords that are
appropriate within our subject area.
We will use a naming convention which enables the names of data files to include information pertaining to the date, time and anonymous participant identifier (a small sketch follows).
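A minimal sketch of such a convention, assuming Python; the field order, separator and example values are illustrative, and each PI's existing local practice takes precedence.

```python
from datetime import datetime

def data_filename(participant_id: str, session_start: datetime,
                  task: str = "task") -> str:
    """Build a data file name encoding the date, time and anonymous
    participant identifier, e.g. 'P017_colour-matching_20180808T1430.txt'."""
    stamp = session_start.strftime("%Y%m%dT%H%M")
    return f"{participant_id}_{task}_{stamp}.txt"

# Example (values hypothetical): anonymous IDs carry no personal information.
print(data_filename("P017", datetime(2018, 8, 8, 14, 30), "colour-matching"))
```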
The laboratory of each PI already operates standard practices which ensure version control of data files. With respect to code developed during the project, the consortium will use version control tools (e.g., svn, git) and will document this code to support its proper maintenance and reuse.
Our data will include documentation intended to support re-use. Specifically,
we will include detailed descriptions of experimental methods and set-up, and
use meaningful column headings and units for all data.
**2.2 Making data openly accessible:**
* Specify which data will be made openly available? If some data is kept closed provide rationale for doing so
* Specify how the data will be made available
* Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
* Specify where the data and associated metadata, documentation and code are deposited
* Specify how access will be provided in case there are any restrictions

## Which
1. Behavioural data. We will seek consent from each participant to make available all behavioural data (e.g. accuracy measurements, reaction time, ratings). All data will be fully anonymised such that individual participants cannot be identified.
2. Audiovisual data. We will make available all audiovisual data (haptic stimuli videos and stimuli images/videos) created as part of the DyViTo project. We expect that these data will be re-used by other researchers to address new research questions.
3. Neuroimaging data. To respect confidentiality, we cannot make available MRI structural scans of individual participants unless they have been modified to remove information about the scalp and facial features; this is done upon receiving a request for the data (a defacing sketch follows this list). We will make available all other data collected with the MRI and EEG methods.
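The de-identification step mentioned in item 3 can be scripted. The sketch below assumes the open-source pydeface tool is installed and invoked via its command line; the scan file names are hypothetical, and other defacing tools would serve equally well.

```python
import subprocess

# Remove facial/scalp features from a structural MRI before sharing
# (file names hypothetical; pydeface writes a defaced copy of the input).
subprocess.run(
    ["pydeface", "sub-01_T1w.nii.gz",
     "--outfile", "sub-01_T1w_defaced.nii.gz"],
    check=True,
)
```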
## How
We will make use of appropriate data repositories for sharing our data. We have identified the _UK Data Service_ as a potentially suitable repository, and are also considering alternative repositories, in particular Zenodo, Psychdata and the University of Cambridge's institutional repository. Using established repositories will ensure that the data can be curated effectively beyond the lifetime of the project. The repositories which we are considering all have experience of curating data relevant to our discipline.
## Methods/Tools Needed to Access Data
All data will be made available with meaningful labels, structured in
appropriate columns and rows and marked with units. Documentation which
describes the format of the data will be included.
Behavioural data will be made available in ASCII format (.txt file) which does not require proprietary software to access.

Neuroimaging data will be made available in standard formats (DICOM, Nifti) for which open source software tools are available.

Audiovisual data will be made available in widely accepted and used file formats (e.g. .tiff, .png, .avi).
**2.3 Making data interoperable:**
1. Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
2. Specify whether you will be using standard vocabulary for all data types present in your data set, to allow interdisciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
To ensure that our data can be re-used by as many researchers as possible,
data will be stored and made available in line with standard practices in
experimental psychology.
All data will be well described and supported with clear documentation, which
describes variable names and abbreviations.
As described in section 2.2, we will deposit our data in established
repositories (the _UK Data Service_ , _Zenodo_ , _PsychData_ or the
_University of Cambridge's institutional repository_ ), each of which has
experience of curating data relevant to our discipline.
We will comply with the established standards and practices for metadata
implemented by these data repositories. We will take advice from data centres
on relevant metadata standards and documentation.
**2.4 Increase data re-use (through clarifying licenses):**
* Specify how the data will be licenced to permit the widest reuse possible
* Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed
* Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
* Describe data quality assurance processes
* Specify the length of time for which the data will remain re-usable
Data will be owned by the DyViTo partner responsible for collecting the data.
To permit the widest use possible, we will licence data using an appropriate
and standard licencing mechanism for research data, such as the Creative
Commons Attribution (CC-BY) or Open Data Commons Open Database (ODC-ODbL)
licences.
We intend that all valuable data produced by the project will contribute to
publications. Accordingly, data will be made available on the publication date
of the associated article.
Our data will be available for use by third parties, and will remain available
after the end of the project.
# ALLOCATION OF RESOURCES
Explain the allocation of resources, addressing the following issues:
* Estimate the costs for making your data FAIR. Describe how you intend to cover these costs
* Clearly identify responsibilities for data management in your project
* Describe costs and potential value of long term preservation
## Costs
We will make use of established repositories, designed for the purpose of
depositing scientific data, which are appropriate within our discipline. There
is no fee for depositing data within these repositories.
The time required by Principal Investigators (PIs) for the preparation of data
was included in the agreed time commitment within the EU Horizon 2020 grant.
The Project Manager (Callie Lancaster) will support PIs and Early Stage
Researchers (ESRs) by assisting with the administrative aspects of outputs and
data deposits.
## Responsibilities
The DyViTo network is overseen by the Network Co-ordinator (Lead PI), Prof
Marina Bloj. The Network Co-ordinator is supported by a project manager
(Callie Lancaster) who will direct communication and sharing of information
across the network, and manage the deposit of outputs and data as outlined in
the data management plan. The Network Coordinator and Project Manager are
supported by an Open Science Champion (Dr Andrew Logan) who has responsibility
for overseeing data management across the DyViTo network, and delivery of the
data management plan.
Data collection will take place within the 9 academic institutions which are
affiliated with the DyViTo network. The principal investigator (PI) at each
institution assumes overall responsibility for data management within their
institution.
Day-to-day data collection, processing and analysis will be undertaken by the
ESR assigned to each institution. The ESR will work under direct supervision
of their PI.
Each PI has responsibility for the storage, back-up and quality assurance of
the data collected within their institution.
## Costs and Potential Value
The data collected throughout the DyViTo project will be of long-term value to
researchers investigating perception of materials in the domains of both
vision and touch. Accordingly, we aim to ensure that our data is curated and
preserved in the long-term.
We have identified a number of established data repositories, appropriate for
our discipline, as the means of achieving this long-term preservation. There
is no cost for depositing data within these
repositories. The time required for ESRs (under supervision of the PI) to
prepare data (time estimate: 1 day per publication) for depositing in these
repositories is directly funded by this EU Horizon 2020 grant.
# DATA SECURITY
Address data recovery as well as secure storage and transfer of sensitive data
Data collected will initially be saved to the lab computer’s hard drive. At
the end of each day, all data will be copied to a secure institutional server
which is managed by each beneficiary’s institutional IT team. The secure
institutional server is regularly backed-up. Responsibility for the transfer
of data from the lab computer to the secure institutional server will lie with
the PI at each institution.
In most beneficiary institutions, there is an automated file back-up system.
In cases where this is unavailable, a process is in place to ensure safe
transfer of data at the end of each day.
Lab computers are password-protected and housed in locked rooms. Data stored
on institutional servers are secured by password protection.
To minimise storage of personal data, participants will be identified by a
unique identifier number. This identifier number will be used to denote
individual participants within all data files. One additional file will be
created for each institution which links each unique identifier number with
the participant’s initials and date of birth. This decoder file will be
password-protected, encrypted and saved on the secure institutional server.
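A minimal sketch of such protection, assuming the widely used Python
cryptography package (the file names are illustrative, and the key would be
stored separately from the decoder file):

```python
from cryptography.fernet import Fernet

# Generate a key once and keep it separate from the decoder file itself.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt the decoder file that links identifier numbers to initials/DOB.
with open("decoder.csv", "rb") as f:
    token = fernet.encrypt(f.read())
with open("decoder.csv.enc", "wb") as f:
    f.write(token)

# Decryption is only possible for holders of the stored key.
plain = fernet.decrypt(token)
```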
At the point of making data available, data can be directly uploaded from the
secure institutional server to the external data repository.
# ETHICAL ASPECTS
To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former
**Participant recruitment:** In general, healthy human adult subjects are
recruited from within the population in which the institution is based and
exclusion criteria are designed either to prevent participation of subjects
for whom the equipment is not safe (e.g. someone with back pain would not be a
suitable subject for a study requiring them to sit still in front of a
computer responding to visual stimuli via a keyboard) or who suffer from
specific conditions that might compromise the results (colour-deficient
individuals would not be suitable participants in a study where colour is one
of the material properties being studied).
In all studies, a gender balanced mix of participants will be sought. In some
experiments, age criteria and handedness will be used to maintain homogeneity
of the sample being studied. All partner institutions have established
mechanisms for recruiting volunteer participants in studies that take into
account issues of inducement (i.e. the financial remuneration offered covers
expenses and compensation for their time while not being so large as to become
a factor in driving participation), and dependent relationships are scrutinised
in accordance with ethics approval processes in place at each site. For each
study, details of their specific recruitment procedure, inclusion/exclusion
criteria and inducement/remuneration are detailed as part of the submission to
the institutional ethics review committee. The DyViTo Ethics Panel will review
these procedures and ethical statements prepared by researchers for their
local institutional ethics committees and keep copies of granted approvals
that will be submitted to Research Executive Agency (REA) upon request.
**Informed consent:** Written informed consent (based on the _UK Data Archive
example consent form_ ) will be obtained from participants before experiments
begin, and particular attention will be paid to ensuring that research data can be
curated and made available for future use. At the outset of the experiment it
will be made clear to participants that they participate voluntarily and that
they have the right to withdraw from the research at any time without giving a
reason (while still receiving remuneration for expenses incurred and
compensation for their time). This is particularly important in the case where
volunteers might be participants in a dependent relationship. For example,
some of the volunteers might be students or individuals employed or
line-managed by the researchers carrying out the study; in these instances it
will be made explicit via the informed consent procedures and forms that
participation in, or withdrawal from, the study at any time will not lead to
any more or less advantageous treatment of the individuals in question.
In some cases, the ethics review and authorisation might explicitly disallow
the participation of volunteers in a dependent relationship, and this will be
noted in the informed consent form. All laboratories ensure that the
participants understand the considerations involved in informed consent. In
general, information about the experiment, its risks and exclusion criteria
will be given to the potential volunteers 24 hours before informed consent can
be given and again at the beginning of the experiment. By allowing a waiting
period between receiving the information and giving informed consent,
volunteers can carefully consider, without pressure, if they want to
participate. In the process of obtaining written consent, the experimenter
interviews the participant to ensure that they understand to what they have
agreed and that they are comfortable with the protocols as explained to them.
In addition to written information, participants will be given a verbal
explanation of the experiments.
For each study details of their specific informed consent procedures are
detailed as part of the submission to the institutional ethics review
committee. The DyViTo Ethics Panel will review these procedures and ethical
statements prepared by researchers for their local institutional ethics
committees and keep copies of granted approvals that will be submitted to
Research Executive Agency (REA) upon request. Template informed consent forms
and detailed information sheets will be prepared for each study and approved
by the appropriate institutional ethics committees and reviewed by the DyViTo
Ethics Panel. Templates of the informed consent forms and information sheets
will be kept in file by the DyViTo Ethics Panel and submitted to the REA on
request.
**Protection of personal data** : All participants in the DyViTo network are
fully committed to following EU, national (British, German, Dutch, Spanish and
Turkish), and institutional (Universities and Museums) ethical standards and
data protection legislation applicable to the collection and protection of
sensitive personal data, when such data is collected. The DyViTo Ethics Panel
will ensure that copies of opinion or confirmation by the competent
Institutional Data Protection Officer and/or authorization or notification by
the National Data Protection Authority will be obtained, kept in the file and
submitted upon request (whichever applies). The studies carried out by the
network do not involve extensive collection of personal data. This will
generally be limited to information such as name, age, gender, handedness and,
in some cases, minimal details of relevant medical history. The latter will be
collected for the sole purpose of establishing whether inclusion/exclusion
criteria are met, for example to prevent participation of subjects for whom
the equipment is not safe (e.g. someone with back pain would not be a suitable
subject for a study requiring them to sit still in front of a computer
responding to visual stimuli via a keyboard) or who suffer from specific
conditions that might compromise the results (colour-deficient individuals
would not be suitable participants in a study where colour is one of the
material properties being studied).
In all cases, participants will give written informed consent to the storage,
preservation and anonymised use (including sharing) of their data. Data
privacy will be guaranteed because anonymisation will occur at the point of
data entry. Data will be held in a confidential, password-protected database.
In all cases, the participant’s name will be replaced by a code that makes any
direct identification impossible. Therefore, no medical or health records or
other sensitive information can be re-identified by a non-authorised
administrator. No identifying personal information will be stored in
combination with actual data. Each participant will be assigned a code. A data
sheet, linking subject identifying information (initials and date of birth)
with codes, will be maintained in an encrypted file stored on the secure
institutional server.
Consent forms inform subjects that data can be shared, stripped of any
identifying information. Group average data will be published and made
available to the public under normal guidelines of good practice for
publishing and data sharing, taking into account the Data Protection Directive
(EC Directive 95/46/EC and national law), and will fully comply with national
and EU legislation. Some studies carried out by the DyViTo network involve the
collection of a small amount of personal data that might not be publicly
available. This will generally be limited to information such as name, age,
gender, handedness and occasionally minimal details of the participants’
relevant medical history. In all cases we will seek relevant authorisation
before data collection and keep copy of authorisations for submission to the
REA on request.
Detailed information on the informed consent procedures that will be
implemented for the collection, storage and protection of personal data will
be prepared for each applicable study and approved by the appropriate
institutional ethics committees and reviewed by the DyViTo Ethics Panel. This
information will be kept in file by the DyViTo Ethics Panel and submitted to
the REA on request.
**Third countries:** The only third country where research will be carried out
is Turkey, an associated country. The Project Legal Signatory for Bilkent
Üniversitesi confirms that all the EU ethical standards and guidelines as
described by the Horizon2020 program will be rigorously applied for all the
research carried out in Turkey.
No materials will be imported from Turkey to EU; and no materials will be
exported from EU to a third country. No local resources with ethical issues
(e.g. animal and/or human tissue, samples, genetic material, live animals,
human remains, materials of historical value, endangered fauna or flora
samples, traditional knowledge etc.) will be used in the research.
The research data collected in Turkey and shared with EU network partners (or
collected in the EU and shared with our Turkish partners) as part of this
project does not include sensitive personal information. It will be anonymised
at the point of recording, managed and stored by Bilkent Üniversitesi in full
accordance with DyViTo's Data Management Plan, which takes into account the
Data Protection Directive (EC Directive 95/46/EC).
# OTHER
Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
Our data management plan seeks to comply with the data management policies of
the beneficiaries, where one is available:
_University of Cambridge_
_Newcastle University_
_TU Delft_
_University of Southampton_
These data management policies may require beneficiaries to undertake
additional steps to those outlined within the DyViTo data management plan.
Institutional support and training with data management is available to
beneficiaries within the above institutions.
# 2\. FAIR data
## 2.1. Making data findable, including provisions for metadata
In accordance with the computing models described above, the data are
discoverable with metadata and they are identifiable. The naming conventions
and other details are specific to each experiment.
The data are naturally divided into data sets (either by a physics theme, or a
run period). These are catalogued in the experimental cataloguing systems
running across the WLCG. These systems index physical file locations, logical
file names, and the associated metadata which describes each data set.
In the ATLAS case, Rucio will handle replicating files to multiple sites. For
the web front end there are multiple servers (stateless). Behind that,
resilience and redundancy are provided by Oracle with the usual RAC
configuration and a Data Guard copy to another set of machines.
The CMS Dataset Bookkeeping Service (DBS) holds details of data sets,
including metadata, and includes mappings from physics abstractions to file
blocks. Multiple instances of DBS are hosted at CERN. The transfer of data
files amongst the many CMS sites is managed by PhEDEx (Physics Experiment Data
EXport), which has its own Transfer Management Database (TMDB), hosted on
resilient Oracle instances at CERN. Logical filenames are translated to
physical filenames at individual sites in the Trivial File Catalogue (TFC).
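Conceptually, such a catalogue maps a site-independent logical file name onto
a site-specific physical path; the following toy sketch (not the actual TFC
rules) illustrates the idea:

```python
# Toy logical-to-physical name translation in the spirit of a trivial
# file catalogue: each site defines its own prefix rule.
SITE_PREFIXES = {
    "T1_Example_A": "root://storage.example-a.org//pnfs/cms",
    "T2_Example_B": "/dcache/cms",
}

def lfn_to_pfn(site: str, lfn: str) -> str:
    """Translate a logical file name into a physical one for a given site."""
    return SITE_PREFIXES[site] + lfn

print(lfn_to_pfn("T1_Example_A", "/store/data/Run2018A/file.root"))
```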
Software is equally important in the LHC context. The knowledge needed to read
and reconstruct Raw data, and to subsequently read and analyse the derived
data is embedded in large software suites and in databases which record
conditions and calibration constants. All such software and databases are
versioned and stored in relevant version management systems. Currently SVN and
Git are used. All experiments store information required to link specific
software versions to specific analyses. All software required to read and
interpret open data will be made available upon request according to the
policies of the experiments.
### 2.2. Making data openly accessible
Each experiment has produced policies with respect to open data preservation &
access. These are the result of agreement between the full set of
international partners of each experiment. These can be found at:
* _http://opendata.cern.ch/search?cc=Data-Policies_
Data preservation and open data access issues have also been developed through
pan experimental initiatives at CERN which include:
* Open data access: _http://opendata.cern.ch/_
* DPHEP Community study group: _http://www.dphep.org/_
### 2.3. Making data interoperable
CERN and the experiments have taken open data access very seriously, and all
of the experiments have developed policies, referenced above, which contain
further details. These specify:
**Which data is valuable to others:** In general Raw (level 4) data could not
be interpreted by third parties without them having a very detailed knowledge
of the experimental detectors and reconstruction software (such data is rarely
used directly by physicists within the collaborations). Derived data (level 3)
may be more easily usable by third parties. Level-1 data is already openly
available.
**The proprietary period:** Experiments specify a fraction of the data that
will be made available after a given reserved period. This period ranges up to
several years reflecting the very large amount of effort expended by
scientists in construction and operation of the experiments over many decades,
and in part following the running cycle that defines large coherent blocks of
data. For details of periods and amounts of data release please see the
individual experiment policies.
**How will data be shared:** Data will be made available in a format specified
by the normal experiment operations. This may be exactly the same
format in which the data are made available to members of the collaborations
themselves. The software required to read the data is also available on a
similar basis, along with appropriate documentation.
CERN, in collaboration with the experiments, has developed an Open Data
Portal. This allows experiments to publish data for open access for research
and for education. The portal offers several high-level tools such as an
interactive event display and histogram plotting. The CERN Open Data platform
also preserves the software tools used to analyse the data. It offers the
download of Virtual Machine images and preserves examples of user analysis
code. CMS is about to use this for data release according to its
policy. The portal can be found at _http://opendata.cern.ch/_ .
In some cases individual experiments have also taken the initiative to develop
or engage with value added open data services using resources obtained in
participating countries. In ATLAS, Recast and RIVET are the recommended means
for reinterpretation of the data by third parties. They have also developed
light-weight packages for exploring self-describing data formats intended
mainly for education and outreach.
### 2.4. Increase data re-use (through clarifying licences)
The CERN Open Data Portal products are shared under open licenses. Further
details can be found at _http://opendata.cern.ch/_ .
Use of CERN Open Portal data in subsequent publications is allowed in
accordance with the _FORCE 11 Joint Declaration of Data Citation_ Principles.
# 3\. Allocation of resources
Both the ATLAS and CMS experiments carry out their data preservation
activities as a natural result of scientific good practice. This leads to
marginal extra staff costs over and above those operating the WLCG, and some
additional storage costs. Naturally these activities rely upon the
continuation of CERN and the remote sites.
Experiments in general do not have any specific resources for carrying out
active open data access activities over and above those described above.
The additional cost of storage for data preservation is starting to be
specified in the annual resource projections of each experiment which go to
the CERN RRB and which are scrutinised and subsequently approved.
# 4\. Data security
The preservation of data follows the following basic principles:
* Level-4 data is fundamental and must be preserved as all other data may, in principle, be derived from it by re-running the reconstruction.
* Some Level-3 data is also preserved. This is done for efficiency and economy since the process to re-derive it may take significant computing resources, and in order to easily facilitate re-analysis, re-use and verification of results.
* Level-2 data has no unique preservation requirement
* Level-1 data is preserved in the journals, and additional data is made available through recognised repositories such as CERN CDS and HEPDATA
* MC data can in principle always be regenerated provided the software and the associated transforms have been preserved (see later). However out of prudence some MC data is also preserved along with associated real data.
The preservation of Level-3 and Level-4 data is guaranteed by the data
management processes of the LHC experiments. The LHC experiments use the
Worldwide LHC Computing Grid (WLCG) to implement those processes. The exact
details are different for each experiment, but broadly speaking the process is
as follows:
* The Raw data is passed from the experimental areas in near real time to the CERN Tier0 data centre where it is immediately stored onto tape.
* CERN has a remote Tier-0 centre in Hungary (Wigner Centre) which provides resilience.
* At least a second tape copy of the Raw data is made shortly afterwards. This second copy is stored at other sites remote to CERN, typically the Tier-1 data centres. The details and number of copies depend upon the detailed computing model of each experiment but the result is resilient copies of the Raw data spread around the world.
* The CERN and remote data centres have custodial obligations for the Raw data and guarantee to manage them indefinitely, including migration to new technologies.
* Level-3 data is derived by running reconstruction programs. Level-3 data is also split up into separate streams optimised for different physics research areas. These data are mostly kept on nearline disk, which is replicated to several remote sites according to experiment replication policies which take account of popularity. One or more copies of this derived data will also be stored on tape.
In summary several copies of the Raw data are maintained in physically remote
locations, at sites with custodial responsibilities.
CERN has developed an analysis preservation system. This allows completed
analyses to be uploaded and made available for future reference. This includes
files, notes, ntuple type data sets, and software extracted from SVN or GIT.
This is already being used by the experiments to deposit completed analyses.
This can be viewed at _http://data.cern.ch/_ (although as it pertains to
internal analysis preservation a valid credential is required).
The wider HEP community has been working together in a collaboration to
develop data preservation methods under the DPHEP study group (Data
Preservation for HEP). The objectives of the group includes (i) to review and
document the physics objectives of the data persistency in HEP (ii) to
exchange information concerning the analysis model: abstraction, software,
documentation etc. and identify coherence points, (iii) to address the
hardware and software persistency status (iv) to review possible funding
programs and other related international initiatives and (v) to converge to a
common set of specifications in a document that will constitute the basis for
future collaborations. More on DPHEP can be found at _http://www.dphep.org/_
.
# 5\. Ethical aspects
LHC data have no connection to individuals and thus have no relation to
personal data or privacy issues.
Ethics issues related to the project of ESR8 at Pangea Formazione have been
described in the deliverables 8.1 (D53) and 8.3 (D55) (cf. Sec. 6 below).
# 6\. Other issues
The INSIGHTS ITN includes several smaller projects not described above: the
AWAKE experiment at CERN, the NEWSdm experiment at Gran Sasso, Italy, and a
project related to traffic flow carried out at the Italian SME Pangea
Formazione.
As a CERN experiment, AWAKE adheres to that organisation's general rules
relating to data management. For the INSIGHTS ESR working on this experiment,
special data sets were recorded that are only expected to be of value for the
ESR and to another student working on the AWAKE project; there is thus no
significant need to make the raw data public.
The data management of NEWSdm is bound to the policies of the Italian National
Institute for Nuclear Physics (INFN). The experiment uses the CNAF site in
Bologna (the INFN national centre for data processing and computing
technology, _https://www.cnaf.infn.it/en/_ ) for data storage and
centralized computing.
The traffic data used by Pangea Formazione is based on material hosted on
public GitHub servers and released either in the public domain or under
permissive licenses like Creative Commons (CC0, CC-BY, CC-BY-SA), BSD or MIT.
Ethics issues related to these data (privacy and potential dual use) were
addressed in deliverables 8.1 (D53) and 8.3 (D55).
# Introduction
The Data Management Plan is the final deliverable in the Dissemination Work
Package.
In this deliverable each partner will describe how they will handle their data
sets, how they will address sensitive issues, and future ideas on how to make
the data sets as open as possible, working with the FAIR principles as set out
by the EC.
At the moment we do not foresee any issues with personal data; should any
arise, the consortium partners will handle these data in line with the GDPR.
The data generated within this project can contain sensitive information, in
terms of IPR. In this document we will describe what the guidelines are for
handling this kind of data.
The DMP is a deliverable that was due at Month 6. At that point in time, only
a few ESRs had started their work, so we chose to delay the DMP until more
ESRs were appointed. At this moment, we do not have a clear picture of which
data can be opened up and which data will need to be kept confidential.
Therefore, we have chosen to make this DMP a living document, since we foresee
that, as the project progresses, insights into which data can be opened up and
how to apply the FAIR principles will be gained.
In MAGISTER, 15 ESRs will receive their training at various institutions.
Table 1 gives an overview of the ESRs and their starting dates; it shows that
at the time of writing only eight ESRs had started their research.
TABLE 1: OVERVIEW OF ESR'S, STARTING DATES AND CONTRIBUTION TO D7.8
<table>
<tr>
<th>
ESR nr
</th>
<th>
Name
</th>
<th>
Start
</th>
<th>
In this deliverable?
</th> </tr>
<tr>
<td>
1
</td>
<td>
Ushnish Sengupta
</td>
<td>
September 1 st , 2018
</td>
<td>
No
</td> </tr>
<tr>
<td>
2
</td>
<td>
Nils Wilhelmsen
</td>
<td>
September 1 st , 2018
</td>
<td>
No
</td> </tr>
<tr>
<td>
3
</td>
<td>
Nilam Tathawadekar
</td>
<td>
September 1 st , 2018
</td>
<td>
No
</td> </tr>
<tr>
<td>
4
</td>
<td>
Louise da Costa Ramos
</td>
<td>
September 1 st , 2018
</td>
<td>
No
</td> </tr>
<tr>
<td>
5
</td>
<td>
Sagar Kulkarni
</td>
<td>
July 1 st , 2018
</td>
<td>
No
</td> </tr>
<tr>
<td>
6
</td>
<td>
Varun Shastry
</td>
<td>
February 5 th , 2018
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
7
</td>
<td>
Alireza Ghasemi Khourinia
</td>
<td>
May 1 st 2018
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
8
</td>
<td>
Francesco Garita
</td>
<td>
January 15 th , 2018
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
9
</td>
<td>
Alireza Javareshkian
</td>
<td>
March 15 th , 2018
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
10
</td>
<td>
Edmond Shehadi
</td>
<td>
April 15 th , 2018
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
11
</td>
<td>
Thomas Christou
</td>
<td>
April 15 th , 2018
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
12
</td>
<td>
Sara Navarro Arredondo
</td>
<td>
May 15 th , 2018
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
13
</td>
<td>
Michael McCartney
</td>
<td>
January 1 st , 2018
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
14
</td>
<td>
Thomas Lafarge
</td>
<td>
September 24 th , 2018
</td>
<td>
No
</td> </tr>
<tr>
<td>
15
</td>
<td>
Pasquale Agostinelli
</td>
<td>
October 1 st , 2018
</td>
<td>
No
</td> </tr> </table>
Each ESR has provided their own data management plan. These plans can be found
in the following chapters.
# Objectives
The objectives of this deliverable are threefold:
* Set out project guidelines for researchers on working with their data during their research.
* Help researchers to open up their data according to the FAIR principles. This will be done at a later stage, when more information is available about what kinds of data sets will be generated.
  * Describe in more detail how each ESR will handle their data. In this version of the deliverable, eight ESRs give as detailed a description as possible.
# Data management guidelines
At this moment, we do not have a clear picture of which data will be generated
during the project.
Furthermore, we cannot foresee which data must be kept confidential and which
data can be stored in open repositories. Our expectation is that definitely
not all data can be stored in open repositories since the companies involved
in the project are keen on keeping their data private. Therefore this section
will be updated when more information becomes available.
For the moment, we set out provisional guidelines for the consortium on
working with (sensitive) data in general and, more specifically, on working
with the FAIR principles:
## General guidelines
1. First of all the data management plan should be in line with the data management policy of your host institution.
2. If you are working with (sensitive) data from (associated) companies, make sure you are aware of and following their rules on data management. This will probably include rules on storage, access, sharing, and open data.
3. Furthermore, in line with EU regulations, we aim to make data as openly accessible as possible.
This means three things:
* If you publish a paper, try to do it in open access journal (Gold or Green). But always give the consortium the opportunity to check on the information you will be publishing.
* When the research is finished, try to open up the datasets as much as possible, for other researchers to re-use. Deposit the dataset in a repository with a data seal of approval. (If you have a valid reason not to open it up, such as company policies or explicit use of personal data, keep the data securely stored and closed.)
* When opening up the data set, keep in line with the FAIR principles (read more in chapter 3.2).
4. If you are going to process personal data, ask project management about GDPR guidelines.
(We do not expect this to happen during this project, but keep it in the back
of your mind.)
5. Each data file should be accompanied by a data description file in which the data set and its variables are described (a sketch of such a file follows this list).
6. If you are going to “clean up” your data file, make sure to describe which steps you’ve taken and how many entries you’ve adjusted.
7. At the end of an ESR’s working period, we expect a manual to be written describing how to access the data after the end of the ESR’s work.
8. If you want to share information within the project, there is a Box solution available at GE. Ask project management about access and rules. (To prevent security breaches, do not setup a separate (cloud) sharing solution like Dropbox on your own.)
9. If you are going to carry around sensitive information, always make sure the information carrier is encrypted and password secured.
10. Last but not least: If you have any questions concerning your data management, do not hesitate to ask your supervisor or the project management.
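As a sketch of guideline 5 (the field names and values are illustrative, not a
project standard), a data description file could be written alongside each
data set like this:

```python
import json

# Illustrative data description accompanying a data file (guideline 5).
description = {
    "data_file": "flame_response_run042.dat",
    "created": "2018-10-01",
    "creator": "ESR, host institution",
    "variables": [
        {"name": "time", "unit": "s", "description": "simulation time"},
        {"name": "p_fluct", "unit": "Pa", "description": "pressure fluctuation"},
    ],
    "processing": "raw output, no cleaning applied",
}

with open("flame_response_run042.json", "w") as f:
    json.dump(description, f, indent=2)
```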
## Fair principles and opening up the data
### 3.2.1 General
We intend to use the data specialists working at the partners within the
consortium, since at researcher level we foresee a lack of specialized
knowledge of the FAIR principles. At the UT, the library and archiving
department can provide specialized support on data management, including Open
Access, Open Data, and Archiving.
### 3.2.2 Findable
Each dataset will get a unique Digital Object Identifier (DOI).
Within the project, naming conventions will be used to support cooperation on
research and to make datasets as easily findable as possible. When storing a
dataset in a trusted repository, the name might be adapted for better
findability.
Appropriate keywords will be used in line with the field of research and the
content of the datasets.
### 3.2.3 Accessible
Our intention is to make as much data openly accessible as possible, within
the boundaries of company confidentiality, IPR, and ethical issues as covered
by the consortium agreement and in the ethics deliverables. Data will be
stored in a trusted repository; this can be either at the partner or, for
example, in DANS or the 4TU Data Centre, as long as the repository has a data
seal of approval.
In case access needs to be limited, a repository will be used in which
restricted access can be provided. If a dataset is going to be stored with
restricted access, a Data Access Committee will be set up in the final stages
of the project. It may decide on a case-by-case basis whether access will be
granted and for which period.
### 3.2.4 Interoperability
Metadata will be used to enhance interoperability. At the moment we are
looking into DataCite and Dublin Core. Depending on the specific field of
research, additional metadata standards can be used.
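By way of illustration only (all values are placeholders, and the exact schema
would follow the chosen repository), a minimal Dublin Core style record could
look like:

```python
# Minimal Dublin Core style metadata record; DataCite would additionally
# require fields such as identifier (DOI), publisher and publicationYear.
record = {
    "dc:title": "Example MAGISTER data set",
    "dc:creator": "MAGISTER ESR",
    "dc:subject": ["thermoacoustics", "combustion"],
    "dc:date": "2018-10-01",
    "dc:type": "Dataset",
    "dc:format": "text/plain",
    "dc:rights": "CC-BY-4.0",
}
print(record)
```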
### 3.2.5 Re-use
Data sets will be licensed under an Open Access licence as much as possible,
to enhance re-use by third parties. This will however depend on the company
policies, IPR, and ethical issues involved. If a dataset contains IPR or other
exploitable results, an embargo period might be set.
All data sets will have a clear naming convention, will be cleared of bad
records, and appropriate meta data and keywords will be added.
# Data per partner
Each ESR describes how they will handle their data in the following tables.
## ESR 6: Varun Shastry (CERFACS)
<table>
<tr>
<th>
1\. Data summary
</th>
<th>
1\.
</th>
<th>
State the purpose of the data collection/generation
_Data collection, generation for validating and developing models related to
spray dynamics and spray combustion._
</th> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Explain the relation to the objectives of the project
_The main objective of the PhD thesis is to study thermoacoustics in spray
flames, hence the data as answered previously._
</td> </tr>
<tr>
<td>
</td>
<td>
3\.
</td>
<td>
Specify the types and formats of data generated/collected
_Data related to experimental setups, conditions and experimental results._
</td> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
Specify if existing data is being re-used (if any) _No pre-existing data is
being used currently._
</td> </tr>
<tr>
<td>
</td>
<td>
5\.
</td>
<td>
Specify the origin of the data
_Not applicable as of now because of previous answer._
</td> </tr>
<tr>
<td>
</td>
<td>
6\.
</td>
<td>
State the expected size of the data (if known) _Not applicable as data is not
finalised._
</td> </tr>
<tr>
<td>
</td>
<td>
7\.
</td>
<td>
Outline the data utility: to whom will it be useful
_If made available, will be useful to other ESRs working in similar areas._
</td> </tr>
<tr>
<td>
2\. Allocation of resources
</td>
<td>
1\.
</td>
<td>
Estimate the costs for making your data FAIR. Describe how you intend to cover
these costs
_Costs estimates have not been discussed and will be done at a later stage._
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Clearly identify responsibilities for data management in your project
_General data handling guidelines of CERFACS apply. Additional
responsibilities will be added as the project moves forward._
</td> </tr>
<tr>
<td>
</td>
<td>
3\.
</td>
<td>
Describe costs and potential value of long term preservation
_Cost estimates for long term will be discussed at the end of the project._
</td> </tr> </table>
<table>
<tr>
<th>
3\. Data security
</th>
<th>
1\.
</th>
<th>
Address data recovery as well as secure storage and transfer of sensitive data
_Data will be stored based on CERFACS guidelines._
</th> </tr>
<tr>
<td>
4\. Ethical aspects
</td>
<td>
1\.
</td>
<td>
To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former.
_Not applicable as of now._
</td> </tr>
<tr>
<td>
5. FAIR Data
5.1. Making data findable, including provisions for metadata
</td>
<td>
1\.
2\.
3\.
</td>
<td>
Outline the discoverability of data (metadata provision)
Outline the identifiability of data and refer to standard identification
mechanism. Do you make use of persistent and unique identifiers such as
Digital Object Identifiers?
Outline which naming conventions will be used
</td> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
Outline the approach towards search keyword
</td> </tr>
<tr>
<td>
</td>
<td>
5\.
</td>
<td>
Outline the approach for clear versioning
</td> </tr>
<tr>
<td>
</td>
<td>
6\.
</td>
<td>
Specify standards for metadata creation (if any). If there are no standards in
your discipline describe what type of metadata will be created and how
_All above questions are not relevant at present moment. Will be updated
later._
</td> </tr>
<tr>
<td>
5.2 Making data openly accessible
</td>
<td>
1\.
</td>
<td>
Specify which data will be made openly available? If some data is kept closed
provide rationale for doing so
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Specify how the data will be made available
</td> </tr>
<tr>
<td>
</td>
<td>
3\.
</td>
<td>
Specify what methods or software tools are needed to access the data? Is
documentation about the software needed to access the data included? Is it
possible to include the relevant software (e.g. in open source code)?
</td> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
Specify where the data and associated metadata, documentation and code are
deposited
</td> </tr>
<tr>
<td>
</td>
<td>
5\.
</td>
<td>
Specify how access will be provided in case there are any restrictions
_All above questions are not relevant at present moment. Will be updated
later._
</td> </tr>
<tr>
<td>
5.3. Making data interoperable
</td>
<td>
1\.
</td>
<td>
Assess the interoperability of your data. Specify what data and metadata
vocabularies, standards or methodologies you will follow to facilitate
interoperability.
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow inter-disciplinary
</td> </tr>
<tr>
<td>
</td>
<td>
interoperability? If not, will you provide mapping to more commonly used
ontologies?
_All above questions are not relevant at present moment. Will be updated
later._
</td> </tr>
<tr>
<td>
5.4. Increase data reuse (through clarifying licences)
</td>
<td>
1. Specify how the data will be licenced to permit the widest reuse possible
2. Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed
3. Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
4. Describe data quality assurance processes
5. Specify the length of time for which the data will remain re-usable
_All above questions are not relevant at present moment. Will be updated
later._
</td> </tr>
<tr>
<td>
6\. Other
</td>
<td>
Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
_CERFACS rules for data management will be the primary set of guidelines to be
followed._
</td> </tr> </table>
## ESR 7: Alireza Ghasemi Khourinia (UT)
<table>
<tr>
<th>
1\. Data summary
</th>
<th>
1\.
</th>
<th>
State the purpose of the data collection/generation
_Flow simulation results to be stored for further post processing and the
verification and validation process._
</th> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Explain the relation to the objectives of the project
_Simulation results are to be analysed and used to draw conclusions_
</td> </tr>
<tr>
<td>
</td>
<td>
3\.
</td>
<td>
Specify the types and formats of data generated/collected _raw data in ASCII
format. Documents in pdf. Media files generated as movies and pictures._
</td> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
Specify if existing data is being re-used (if any)
_Source code of SU2 as available through Github_
</td> </tr>
<tr>
<td>
</td>
<td>
5\.
</td>
<td>
Specify the origin of the data
_Data is mainly generated in house_
</td> </tr>
<tr>
<td>
</td>
<td>
6\.
</td>
<td>
State the expected size of the data (if known) _in the order of terabytes_
</td> </tr>
<tr>
<td>
</td>
<td>
7\.
</td>
<td>
Outline the data utility: to whom will it be useful _researchers in the field
of CFD and combustion_
</td> </tr>
<tr>
<td>
2\. Allocation of resources
</td>
<td>
1\.
</td>
<td>
Estimate the costs for making your data FAIR. Describe how you intend to cover
these costs _TBD_
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Clearly identify responsibilities for data management in your project
_responsibilities are on ESR7 to manage the data in collaboration with his
supervisor and under the data management of UT_
</td> </tr>
<tr>
<td>
</td>
<td>
3\.
</td>
<td>
Describe costs and potential value of long term preservation _external hard
drive and use of cloud storage should we choose to use it_
</td> </tr>
<tr>
<td>
3\. Data security
</td>
<td>
1\.
</td>
<td>
Address data recovery as well as secure storage and transfer of sensitive data
_redundant backups to be made of the data._
</td> </tr>
<tr>
<td>
4\. Ethical aspects
</td>
<td>
1\.
</td>
<td>
To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former.
_decided based on Magister project policies and UT data management policies_
</td> </tr>
<tr>
<td>
5. FAIR Data
5.1. Making data findable, including
</td>
<td>
1\.
</td>
<td>
Outline the discoverability of data (metadata provision) _decided based on
Magister project policies and UT data management policies_
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Outline the identifiability of data and refer to standard identification
mechanism. Do you make use of persistent and
</td> </tr> </table>
<table>
<tr>
<th>
provisions for metadata
</th>
<th>
</th>
<th>
unique identifiers such as Digital Object Identifiers?
_doi will be used for publications only_
</th> </tr>
<tr>
<td>
</td>
<td>
3\.
</td>
<td>
Outline which naming conventions will be used
_the format will be chosen to reflect the content and date of data generation_
</td> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
Outline the approach towards search keyword _will be provided within
publications only_
</td> </tr>
<tr>
<td>
</td>
<td>
5\.
</td>
<td>
Outline the approach for clear versioning
_Integer whole number versions with decimal points for iterative changes_
</td> </tr>
<tr>
<td>
</td>
<td>
6\.
</td>
<td>
Specify standards for metadata creation (if any). If there are no standards in
your discipline describe what type of metadata will be created and how
_readme files might be included for better clarifications_
</td> </tr>
<tr>
<td>
5.2 Making data openly accessible
</td>
<td>
1\.
</td>
<td>
Specify which data will be made openly available? If some data is kept closed
provide rationale for doing so
_the data will be made available to whom it might be of interest upon contact.
decided based on Magister project policies and UT data management policies_
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Specify how the data will be made available
_available upon request in forms of hard copy and cloud storage if possible.
Decided based on Magister project policies and UT data management policies._
</td> </tr>
<tr>
<td>
</td>
<td>
3\.
</td>
<td>
Specify what methods or software tools are needed to access the data? Is
documentation about the software needed to access the data included? Is it
possible to include the relevant software (e.g.
in open source code)?
_The data is mostly accessible in .vtk format as well as SU2 binaries (see the
sketch following this table)._
_SU2 is an open source code available to the public via GitHub._
</td> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
Specify where the data and associated metadata, documentation and code are
deposited
_stored in external hard disks as well as a secondary redundant backup_
</td> </tr>
<tr>
<td>
</td>
<td>
5\.
</td>
<td>
Specify how access will be provided in case there are any restrictions
_The data will be made available to interested individuals or organizations
upon contact, should the sharing process be aligned with the policies of the
Magister project and UT. Decided based on Magister project policies and UT
data management policies._
</td> </tr>
<tr>
<td>
5.3. Making data interoperable
</td>
<td>
1\.
</td>
<td>
Assess the interoperability of your data. Specify what data and metadata
vocabularies, standards or methodologies you will follow to facilitate
interoperability.
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
_Data is readable through open source means. Proper naming schemes and readme
files will assist further interoperability_
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow inter-disciplinary interoperability? If
not, will you provide mapping to more commonly used ontologies?
_data is readable through open source means_
</td> </tr>
<tr>
<td>
5.4. Increase data reuse (through clarifying licences)
</td>
<td>
1\.
</td>
<td>
Specify how the data will be licenced to permit the widest reuse possible
_decided based on Magister project policies and UT data management policies_
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Specify when the data will be made available for re-use. If applicable,
specify why and for what period a data embargo is needed
_decided based on Magister project policies and UT data management policies_
</td> </tr>
<tr>
<td>
</td>
<td>
3\.
</td>
<td>
Specify whether the data produced and/or used in the project is useable by
third parties, in particular after the end of the project? If the re-use of
some data is restricted, explain why _decided based on Magister project
policies and UT data management policies_
</td> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
Describe data quality assurance processes
_redundant backups are to be made of the data for further assurance_
</td> </tr>
<tr>
<td>
</td>
<td>
5\.
</td>
<td>
Specify the length of time for which the data will remain re-usable _decided
based on Magister project policies and UT data management policies_
</td> </tr>
<tr>
<td>
6\. Other
</td>
<td>
Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any) _In alignment with data policies of UT_
</td> </tr> </table>
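As an illustration of the open tooling mentioned in the table above, a .vtk
result file can be read with the open source meshio package (a sketch; the
file name is hypothetical):

```python
import meshio

# Read a VTK result file with the open source meshio package;
# no proprietary software is needed.
mesh = meshio.read("flow_solution.vtk")
print(mesh.points.shape)                      # node coordinates
print([block.type for block in mesh.cells])   # cell types in the mesh
print(list(mesh.point_data))                  # field names stored at the nodes
```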
## ESR 8: Francesco Garita (University of Cambridge)
<table>
<tr>
<th>
1\. Data summary
</th>
<th>
1\.
</th>
<th>
Data collection/generation is a fundamental aspect of research, especially in
engineering. A deep understanding of physical phenomena is based on data
production/acquisition.
</th> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
The aim of the project is to improve our understanding of thermoacoustic
instabilities in gas turbines. This can only be achieved by combining
low-order models and numerical simulations with experimental data. Thus, data
collection/generation is a fundamental aspect of this project.
</td> </tr>
<tr>
<td>
</td>
<td>
3\.
</td>
<td>
At the moment, the raw data that have been produced are simply text files with
“.dat” extension.
</td> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
At the moment, no existing data have been re-used.
</td> </tr>
<tr>
<td>
</td>
<td>
5\.
</td>
<td>
The data produced so far come from experiments carried out on a laboratory rig
installed in the Engineering Department. At a later stage, data will be
acquired also from other sources (e.g. Rolls Royce, etc.). Details will be
provided at a later stage.
</td> </tr>
<tr>
<td>
</td>
<td>
6\.
</td>
<td>
The data produced so far are not larger than 1 GB. This number will of course
change over time, therefore it is not possible to provide at the moment an
expected size for the data produced at the end of the project.
</td> </tr>
<tr>
<td>
</td>
<td>
7\.
</td>
<td>
The data produced during this project will be used to publish papers. In
addition to this, some people working in the Engineering Department of the
Cambridge University may use them for their purposes – in this case a
collaboration will be established. Eventually, other MAGISTER ESRs may benefit
from these data as well, even though at the moment no partnership has been
established.
</td> </tr>
<tr>
<td>
2\. Allocation of resources
</td>
<td>
1\.
</td>
<td>
At the moment I do not expect any cost for making my data FAIR. This will
become relevant only at a later stage when/if very large datasets will be
produced.
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
I (Francesco Garita) will be the main person responsible for data management.
Following me, my supervisor (Prof. Matthew Juniper) and, more in general, the
University of Cambridge will have responsibilities as well.
</td> </tr>
<tr>
<td>
</td>
<td>
3\.
</td>
<td>
Again, this will be a relevant problem only for very large datasets. At the
moment, I do not consider this to be a problem. My supervisor (Prof. Matthew
Juniper) has offered his personal website for data storage. Alternatively, the
official UCAM’s data server can be used ( _www.data.cam.ac.uk_ ) . In either
case, this will not have any cost in my opinion, unless we deal with very
large datasets.
</td> </tr> </table>
<table>
<tr>
<th>
3\. Data security
</th>
<th>
1\.
</th>
<th>
Once the project data will be stored on a proper data server (e.g. UCAM’s data
server), we will make sure that these data can be easily accessed and that
they cannot be overwritten or modified by any users. We will also make sure to
have a copy of these data on physical storage devices to avoid any hacking
attack. If sensitive data are to be stored, the storage will be made secure
through encryption.
</th> </tr>
<tr>
<td>
4\. Ethical aspects
</td>
<td>
1\.
</td>
<td>
Nothing to report at the moment.
</td> </tr>
<tr>
<td>
5. FAIR Data
5.1. Making data findable, including provisions for metadata
</td>
<td>
1\.
2\.
</td>
<td>
Each dataset, that will be made available at the end of the project, will be
thoroughly described using “README” text files.
Each dataset, that will be made available at the end of the project, will be
identified using a unique DOI. The way of structuring these data will be
discussed with the project management at the end of the project.
</td> </tr>
<tr>
<td>
</td>
<td>
3\.
</td>
<td>
This will be discussed with the project management towards the end of the
project. At the moment I can only say that the name of each file should
contain the number each ESR is associated to (in my case 8).
</td> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
The name of each file should contain specific keywords in order to ease the
search of a file by a general user.
</td> </tr>
<tr>
<td>
</td>
<td>
5\.
</td>
<td>
At the end of the project, only the latest (i.e. most up-to-date) data
versions will be made available. In case more than one version is needed, the
order in which the different versions were produced will be made clear.
</td> </tr>
<tr>
<td>
</td>
<td>
6\.
</td>
<td>
For each dataset, I will create specific “README” text files with the goal of
appropriately describing the dataset (variables, etc.), in order to ease
comprehension by the final user.
</td> </tr>
<tr>
<td>
5.2 Making data openly accessible
</td>
<td>
1\.
</td>
<td>
My supervisor and I are completely available to share the data that I will
produce during the project. Only some of these data may not be made open-
access. These will be data collected during my industrial secondment at Rolls
Royce. The decision of making these data open-access will be entirely taken by
Rolls Royce, not by me.
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
The data will be made available through the personal website of my supervisor
(http://www2.eng.cam.ac.uk/~mpj1001/MJ_biography.html) and/or through the
UCAM’s data server ( _www.data.cam.ac.uk_ ) .
</td> </tr>
<tr>
<td>
</td>
<td>
3\.
</td>
<td>
At the moment, I do not foresee the need for specific software to access the
data, as the files produced so far are simply text files in “.dat” format.
</td> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
See point 2.
</td> </tr>
<tr>
<td>
</td>
<td>
5\.
</td>
<td>
In case of restrictions, we could encrypt the data so that, for example, only
MAGISTER ESRs can access them. In this case, a username and a password will be
assigned to each ESR.
</td> </tr>
<tr>
<td>
5.3. Making data interoperable
</td>
<td>
1\.
</td>
<td>
During the 3-year project, data will be shared using the online toolbox
provided by GE. This toolbox has been created for this purpose by GE, and only
members of the MAGISTER project are able to access it. By creating folders on
this toolbox for each of the 15 ESRs, it becomes easy and clear to identify
which data belong to whom.
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
I will adopt standard vocabularies for all data types present in my datasets
in order to ease the comprehension by the final user.
</td> </tr>
<tr>
<td>
5.4. Increase data reuse (through clarifying licences)
</td>
<td>
1\.
</td>
<td>
The data produced by me will not specifically need a license as far as I know.
The data produced at Rolls Royce will instead require a license. This will be
discussed at a later stage with the company itself.
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
The final data will be made available at the end of the project. Partial data
will be made available through the GE toolbox when other ESRs need them.
</td> </tr>
<tr>
<td>
</td>
<td>
3\.
</td>
<td>
As already explained, my supervisor and I are completely in favour of sharing
data. Possible restrictions may be related to the data produced during the
industrial secondment at Rolls Royce.
However, these will be a small portion.
</td> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
Usually my approach is to ask one of my colleagues to take a look at my data
to see whether they can be understood. This generally guarantees that other
users will find the description of my data clear enough.
</td> </tr>
<tr>
<td>
</td>
<td>
5\.
</td>
<td>
The length of time for which the data will remain re-usable (i.e. available
online) depends on the university if we use the UCAM data server. If instead
we use my supervisor’s website, the data can theoretically stay there
indefinitely.
</td> </tr>
<tr>
<td>
6\. Other
</td>
<td>
</td>
<td>
</td> </tr> </table>
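Points 3–5 of Section 5.1 above describe file names that carry the ESR number,
searchable keywords and a clear version order. As a minimal sketch of how such
a convention might be applied and checked, consider the following Python
helper; the exact pattern, the keyword choices and the version tag are
assumptions, since the final conventions will only be agreed with the project
management towards the end of the project.

```python
# Minimal sketch of a file-naming helper for the conventions in points 3-5.
# The pattern, keywords and version scheme are hypothetical; the final
# conventions will be agreed with the project management.

import re

ESR_NUMBER = 8  # each file name carries the ESR number (point 3)

def build_name(keywords, version, extension="dat"):
    """Compose a file name from the ESR number, search keywords and a version tag."""
    stem = "_".join([f"ESR{ESR_NUMBER:02d}", *keywords, f"v{version}"])
    return f"{stem}.{extension}"

def is_valid(name):
    """Check a name against the assumed ESRxx_..._vN.ext pattern."""
    pattern = rf"ESR{ESR_NUMBER:02d}(_[A-Za-z0-9-]+)+_v\d+\.\w+"
    return re.fullmatch(pattern, name) is not None

if __name__ == "__main__":
    name = build_name(["flame", "transfer-function"], version=2)
    print(name)            # ESR08_flame_transfer-function_v2.dat
    print(is_valid(name))  # True
```

Carrying the version tag in the name itself also makes the ordering of
versions (point 5) visible to a general user without opening the files.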
## ESR 9: Alireza Javareshkian (TUM)
<table>
<tr>
<th>
1\. Data summary
</th>
<th>
1\.
</th>
<th>
State the purpose of the data collection/generation The data should be
generated in accordance with the project master plan. _The data already
generated will be aggregated and sorted to extract the useful data, which will
then be collected and stored._
</th> </tr>
<tr>
<td>
</td>
<td>
_2._
</td>
<td>
Explain the relation to the objectives of the project
_Data will be generated through the experimental and numerical phases of the
project, from which the useful data will be extracted._
</td> </tr>
<tr>
<td>
</td>
<td>
_3._
</td>
<td>
Specify the types and formats of data generated/collected _Not yet specified_
</td> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
Specify if existing data is being re-used (if any) _Not yet specified_
</td> </tr>
<tr>
<td>
</td>
<td>
5\.
</td>
<td>
Specify the origin of the data _The data will be generated via measurement
techniques during the experimental phase or via CFD simulations during the
numerical phase of the project._
</td> </tr>
<tr>
<td>
</td>
<td>
6\.
</td>
<td>
State the expected size of the data (if known) _Not yet estimated_
</td> </tr>
<tr>
<td>
</td>
<td>
7\.
</td>
<td>
Outline the data utility: to whom will it be useful _The already collected
data will be stored and permanent access will be granted to the Chair of
Thermodynamics at TUM._
</td> </tr>
<tr>
<td>
2\. Allocation of resources
</td>
<td>
1\.
</td>
<td>
Estimate the costs for making your data FAIR. Describe how you intend to cover
these costs _Not yet estimated._
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Clearly identify responsibilities for data management in your project _Not yet
assigned._
</td> </tr>
<tr>
<td>
</td>
<td>
3\.
</td>
<td>
Describe costs and potential value of long term preservation _Not yet
estimated._
</td> </tr>
<tr>
<td>
3\. Data security
</td>
<td>
1\.
</td>
<td>
Address data recovery as well as secure storage and transfer of
sensitive data _The useful data will be stored on the Leibniz-Rechenzentrum
cloud and will be routinely backed up._
</td> </tr>
<tr>
<td>
4\. Ethical aspects
</td>
<td>
1\.
</td>
<td>
To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former.
</td> </tr>
<tr>
<td>
5. FAIR Data
5.1. Making data findable, including provisions for metadata
</td>
<td>
1\.
2\.
</td>
<td>
Outline the discoverability of data (metadata provision) _A manual will be
prepared in which access to the datasets is outlined_
Outline the identifiability of data and refer to standard identification
mechanism. Do you make use of persistent and unique identifiers such as
Digital Object Identifiers? _Not yet specified_
</td> </tr>
<tr>
<td>
</td>
<td>
_3._
</td>
<td>
Outline which naming conventions will be used _Will be specified in due
course._
</td> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
Outline the approach towards search keyword _Will be specified_
</td> </tr>
<tr>
<td>
</td>
<td>
5\.
</td>
<td>
Outline the approach for clear versioning _Will be specified_
</td> </tr>
<tr>
<td>
</td>
<td>
6\.
</td>
<td>
Specify standards for metadata creation (if any). If there are no standards in
your discipline describe what type of metadata will be created and how _Not
specified yet_ .
</td> </tr>
<tr>
<td>
5.2 Making data openly accessible
</td>
<td>
1\.
</td>
<td>
Specify which data will be made openly available? If some data is kept closed
provide rationale for doing so _Not clear yet._
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Specify how the data will be made available _Not clear yet._
</td> </tr>
<tr>
<td>
</td>
<td>
_3._
</td>
<td>
Specify what methods or software tools are needed to access the data? Is
documentation about the software needed to access the data included? Is it
possible to include the relevant software (e.g. in open source code)? _Will be
specified upon having decided about the prior issues._
</td> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
Specify where the data and associated metadata, documentation and code are
deposited _So far, the Leibniz-Rechenzentrum is considered as the main
repository._
</td> </tr>
<tr>
<td>
</td>
<td>
5\.
</td>
<td>
Specify how access will be provided in case there are any restrictions
_Permanent access will be granted to the Chair of Thermodynamics at TUM._
</td> </tr>
<tr>
<td>
5.3. Making data interoperable
</td>
<td>
1\.
</td>
<td>
Assess the interoperability of your data. Specify what data and metadata
vocabularies, standards or methodologies you will follow to facilitate
interoperability. _Will be characterized_
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow inter-disciplinary interoperability? If
not, will you provide mapping to more commonly used ontologies? _Will be
specified_ .
</td> </tr>
<tr>
<td>
5.4. Increase data reuse (through clarifying licences)
</td>
<td>
1\.
2\.
</td>
<td>
Specify how the data will be licenced to permit the widest reuse possible _Not
specified yet._
Specify when the data will be made available for re-use. If applicable,
specify why and for what period a data embargo is needed _Not specified yet._
</td> </tr>
<tr>
<td>
</td>
<td>
_3._
</td>
<td>
Specify whether the data produced and/or used in the project is useable by
third parties, in particular after the end of the project? If the re-use of
some data is restricted, explain why _Not specified yet._
</td> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
Describe data quality assurance processes _QA processes will be specified._
</td> </tr>
<tr>
<td>
</td>
<td>
5\. Specify the length of time for which the data will remain re-usable
</td> </tr>
<tr>
<td>
6\. Other
</td>
<td>
Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
</td> </tr>
<tr>
<td>
</td>
<td>
</td> </tr> </table>
## ESR 10: Edmond Shehadi (UT)
Due to stringent emission regulations and international climate
agreements, aircraft engine technology needs to improve. Among the most
important aspects considered to address this situation are more efficient
combustion by means of higher compression ratios, as well as a cleaner
combustion process altogether. Through a team of 15 early-stage researchers
within the EU, the MAGISTER project aims to address this problem through the
use of machine learning (ML). Such ML algorithms would help autonomously adapt
the thermoacoustics from lower to higher technology readiness levels (TRL). In
turn, this will allow for a better, more efficient engine design procedure
that takes into consideration the thermoacoustic disturbances generated
therein.
The aim of this PhD project is twofold. First, it will further develop the
open-source CFD code SU2 1 by extending high-order discretization methods,
particularly the discontinuous Galerkin (dG) method, in order to achieve
high-fidelity simulations of the combustor liner and dilution holes. This
procedure is performed using large-eddy simulation (LES) and by implementing,
as well as comparing, different subgrid-scale (SGS) models. Second, it will
utilize the dG method, in conjunction with LES, on compressible turbulent flow
within the SU2 framework in order to study the acoustic behavior inside such
engine regions. This, in turn, will give rise to more accurate results via
high-order numerical simulations, both spatial and temporal.
Eventually, provided everything works out, such work will assist other
scientists and engineers to better understand how the acoustics induced by
mechanical and thermal vibrations affect the performance of jet engines.
Accordingly, such a deliverable will help better design and optimize existing
products that are subject to high acoustic instabilities and turbulence, in a
more accurate manner.
<table>
<tr>
<th>
1\. Data summary
</th>
<th>
Simulations about a confined geometry involving compressible turbulent flow
are done and the resulting statistics are gathered for post-processing. Such
simulations assist in validating the open-source dG solver and provide for a
better, more accurate acoustical representation in combustor liner and
dilution holes.
Existing data from such simulations might be re-used for reproducibility
and/or additional correlations that give better insight about the overall
performance of the numerical procedure. All data are generated from the SU2
open-source code and use the .su2 format.
The amount of data from the simulations can easily reach hundreds of
gigabytes, given the many simulations that are carried out. However, the
instantaneous solution is deleted, due to its size, after extracting the
necessary temporally- and/or spatially-averaged statistical quantities. For a
large number of files, these averaged data are saved in CGNS format.
All simulations mentioned will utilize the dG solver in SU2, which is
being developed to tackle such flows. Hence, part of the data also consists of
contributions to the SU2 framework in .cpp and .h format on its main
Github 2 repository. Thus, the output of the data is twofold: extend the
open-source SU2 code such that it is able to simulate such
</th> </tr> </table>
1. _https://su2code.github.io/_
2. _https://github.com/_
<table>
<tr>
<th>
</th>
<th>
physics, and eventually both validate and study the acoustics in jet engines
using such a tool. Such results are useful for researchers, in both academia
and industry, interested in better understanding how acoustics are generated
from turbulence and what role they play, while using fewer computational
resources than the current state-of-the-art.
</th> </tr>
<tr>
<td>
2\. Allocation of resources
</td>
<td>
The extension of the SU2 code is done on its Github repository. This allows
anyone interested in using such a tool, or further improving on it, to do so
free of charge – hence open-source.
The data generated using such a code will include the required statistics
only. In the event that certain simulations need to be repeated, the mesh
file as well as the input .cfg files used in SU2 will be provided. These files
may be stored at the 4TU data center, provided it has a data seal of approval.
Costs for these statistics are negligible, since they reduce to a few
gigabytes worth of information. Up to this point, no need for long-term
preservation of any data has been identified.
</td> </tr>
<tr>
<td>
3\. Data security
</td>
<td>
The data are collected from the cluster at the University of Twente (EFD
department) and are stored on the local workstation of the host. Backups are
done in a regular manner using external (portable) hard drives.
</td> </tr>
<tr>
<td>
4\. Ethical aspects
</td>
<td>
To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former.
</td> </tr>
<tr>
<td>
5. FAIR Data
5.1. Making data findable, including provisions for metadata
</td>
<td>
The data used in the SU2 code is available on the public Github repository. As
for the simulation data, the MAGISTER consortium has to approve which is
suitable for public exposure.
The naming convention used is based on the following model:
physics_Model_Time_SimNumber, whereby “physics” refers to the actual physics
(e.g. acoustics), “Model” is the turbulence model used in LES,
“Time” denotes the fine time-step used in the simulation and “SimNumber” is a
lexicographic numbering applied to the different simulations (a minimal
sketch of this convention follows this table).
Some of the search keywords are: turbulence, LES, dG, computational acoustics,
non-reflective boundary conditions.
</td> </tr>
<tr>
<td>
5.2 Making data openly accessible
</td>
<td>
The code, by definition of open-source, is publicly accessible for use and
modifications. The data obtained from using such a code in different
simulations need to be agreed upon by the MAGISTER project consortium before
sharing outside of the involved partners.
</td> </tr>
<tr>
<td>
5.3. Making data interoperable
</td>
<td>
When contributing to the SU2 code, the main developers’
recommendations are heeded; this includes variable naming conventions, certain
programming styles, separation of source and header files, source code
compliance with the C++ ISO/ANSI standard… etc.
</td> </tr>
<tr>
<td>
5.4. Increase data reuse (through clarifying licences)
</td>
<td>
License for use of SU2 is given under the GNU Lesser General Public License,
version 2.1.
</td> </tr>
<tr>
<td>
6\. Other
</td>
<td>
\--
</td> </tr> </table>
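The physics_Model_Time_SimNumber naming model quoted in Section 5.1 above can
be illustrated with a small helper that builds and parses such names. This is
a sketch only; the example field values (acoustics, WALE, 1e-06, run 3) are
invented, and only the four-field pattern comes from the text.

```python
# Sketch of the physics_Model_Time_SimNumber naming model (Section 5.1).
# The example values (acoustics, WALE, 1e-06, run 3) are hypothetical.

def build_name(physics, model, time_step, sim_number):
    """Compose a simulation name: physics, LES SGS model, time step, run id."""
    return f"{physics}_{model}_{time_step}_{sim_number:03d}"

def parse_name(name):
    """Split a simulation name back into its four fields."""
    physics, model, time_step, sim_number = name.split("_")
    return {"physics": physics, "model": model,
            "time_step": float(time_step), "sim_number": int(sim_number)}

if __name__ == "__main__":
    name = build_name("acoustics", "WALE", "1e-06", 3)
    print(name)              # acoustics_WALE_1e-06_003
    print(parse_name(name))  # {'physics': 'acoustics', 'model': 'WALE', ...}
```

Zero-padding the run number keeps the lexicographic numbering mentioned in the
text consistent with numeric ordering when files are listed.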
## ESR 11: Thomas Christou (KIT)
<table>
<tr>
<th>
1\. Data summary
</th>
<th>
1\.
</th>
<th>
The data includes the local velocities in two directions and the local droplet
size distribution. With the expected data, the influence of oscillating flow
on spray quality shall be investigated and quantified.
</th> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
The time-dependent data will be analysed in frequency space in order to
investigate the impact of flow oscillation on the droplet size distribution (a
minimal sketch of such an analysis follows this table).
</td> </tr>
<tr>
<td>
</td>
<td>
3\.
</td>
<td>
Text format, pictures
</td> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
No existing data is being re-used
</td> </tr>
<tr>
<td>
</td>
<td>
5\.
</td>
<td>
Phase-Doppler-Anemometry measurements, camera
</td> </tr>
<tr>
<td>
</td>
<td>
6\.
</td>
<td>
Expected size of the data: 10GB
</td> </tr>
<tr>
<td>
</td>
<td>
7\.
</td>
<td>
The data will be used within the project in WP4.
</td> </tr>
<tr>
<td>
2\. Allocation of resources
</td>
<td>
1\.
</td>
<td>
The data will be stored on KIT servers. Data is usually preserved for a
life-time of 10 years; however, a prolongation can be requested. The
preservation costs are covered by a yearly budget of the KIT.
</td> </tr>
<tr>
<td>
3\. Data security
</td>
<td>
1\.
</td>
<td>
The KIT is subject to the legal data protection regulations of the European
Union (General Data Protection Regulation, GDPR) and the Landesdatenschutz of
Baden-Württemberg (from 20.06.2018).
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
The backup of the project-related data and its long-term use is ensured by a
subject-specific folder and filing system as well as by securing the raw and
result data. Data archiving takes place via mass storage and a tape drive,
both secured by an emergency power supply and CO2 fire extinguishing. Hard
disks are automatically mirrored, and access security is in place. For the
transfer of sensitive data, the state of Baden-Württemberg in Germany provides
a secure sharing platform (bwSync&Share) for its universities.
</td> </tr>
<tr>
<td>
4\. Ethical aspects
</td>
<td>
1\.
</td>
<td>
No ethical violations are expected with the targeted data, and the produced
data will be in accordance with the ethics section of the DoA.
</td> </tr>
<tr>
<td>
5. FAIR Data
5.1. Making data findable, including provisions for metadata
</td>
<td>
1\.
</td>
<td>
The FAIR data management plan will be specified in a later stage in
coordination with the partners.
</td> </tr>
<tr>
<td>
5.2 Making data openly accessible
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
5.3. Making data interoperable
</td>
<td>
</td> </tr>
<tr>
<td>
5.4. Increase data reuse (through clarifying licences)
</td>
<td>
</td> </tr>
<tr>
<td>
6\. Other
</td>
<td>
bwSync&Share
</td> </tr>
<tr>
<td>
</td>
<td>
</td> </tr> </table>
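Point 1.2 of the table above states that the time-dependent data will be
analysed in frequency space. The following Python sketch shows one plausible
form of such an analysis using NumPy's FFT on a synthetic signal; the sampling
rate and the 400 Hz forcing component are invented for illustration and do not
come from the measurements.

```python
# Minimal sketch of a frequency-space analysis of time-dependent data
# (cf. point 1.2 of the table above). All signal parameters are hypothetical.

import numpy as np

fs = 10_000.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)    # one second of samples
# Synthetic stand-in for a measured velocity trace: 400 Hz forcing plus noise.
signal = np.sin(2 * np.pi * 400.0 * t) + 0.3 * np.random.randn(t.size)

spectrum = np.fft.rfft(signal)               # one-sided FFT of the real signal
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)    # matching frequency axis
amplitude = np.abs(spectrum) / t.size * 2.0  # rough amplitude normalisation

peak = freqs[np.argmax(amplitude[1:]) + 1]   # skip the DC bin
print(f"Dominant oscillation at about {peak:.0f} Hz")
```

The same spectrum, computed for the droplet-size signal, would let the forcing
frequency's imprint on spray quality be quantified directly.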
## ESR 12: Sara Navarro Arredondo (UT)
<table>
<tr>
<th>
1\. Data summary
</th>
<th>
1\.
</th>
<th>
State the purpose of the data collection/generation
_The purpose of the data generated is to characterize acoustically forced and
unforced kerosene spray flames at elevated pressure with preheated air,
generating base data that can contribute to the machine learning work._
</th> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Explain the relation to the objectives of the project
_To collect experimental data on the acoustics of the combustion of aircraft
fuels and to compare it with modelling data._
</td> </tr>
<tr>
<td>
</td>
<td>
3\.
</td>
<td>
Specify the types and formats of data generated/collected
_Model parameter values (.csv file), model input data (.csv file), model
scripts (.mgf, .csv), model output (.csv, text file), manuscripts (Word),
others (Adobe, Word, Excel)_
</td> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
Specify if existing data is being re-used (if any)
_Operating parameters from previous research work related to the DESIRE
combustor at CWT UT._
</td> </tr>
<tr>
<td>
</td>
<td>
5\.
</td>
<td>
Specify the origin of the data
_Research PhD projects of Jaap van Kampen and Mehmet Kapucu, at the CWT UT._
</td> </tr>
<tr>
<td>
</td>
<td>
6\.
</td>
<td>
State the expected size of the data (if known)
_About 20 GB of research data (rough estimate)_
</td> </tr>
<tr>
<td>
</td>
<td>
7\.
</td>
<td>
Outline the data utility: to whom will it be useful
_The data generated will be useful to the other members of the_
_MAGISTER project, particularly ESR11 and ESR06_
</td> </tr>
<tr>
<td>
2\. Allocation of resources
</td>
<td>
1\.
</td>
<td>
Estimate the costs for making your data FAIR. Describe how you intend to cover
these costs
_TBD_
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Clearly identify responsibilities for data management in your project
_I am responsible for collecting and managing the data used in my research, in
consultation with my supervisor_
</td> </tr>
<tr>
<td>
</td>
<td>
3\.
</td>
<td>
Describe costs and potential value of long term preservation
_TBD_
</td> </tr>
<tr>
<td>
3\. Data security
</td>
<td>
1\.
</td>
<td>
Address data recovery as well as secure storage and transfer of sensitive data
_At the end of the project, data will be made publicly available under the
restrictions set by UT, and the supervisor will receive a full copy of the
data._
</td> </tr>
<tr>
<td>
4\. Ethical aspects
</td>
<td>
1\.
</td>
<td>
To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former.
_No additional points._
</td> </tr>
<tr>
<td>
5. FAIR Data
5.1. Making data findable, including provisions for metadata
</td>
<td>
1\.
</td>
<td>
Outline the discoverability of data (metadata provision)
_For all input data, one document containing all metadata will be created,
specifying at least the source, time period, measurement method, type of data,
unit of measurement, access rights and date downloaded. All data will be
checked for consistency, and any changes made to input data will be documented
(a sketch of such a metadata record follows this table)._
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Outline the identifiability of data and refer to standard identification
mechanism. Do you make use of persistent and unique identifiers such as
Digital Object Identifiers?
_Yes._
</td> </tr>
<tr>
<td>
</td>
<td>
3\.
</td>
<td>
Outline which naming conventions will be used _To be specified later._
</td> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
Outline the approach towards search keyword
_Combustion, atomization, fuel atomization, kerosene combustion, acoustic
oscillation, acoustic instabilities in combustion, machine learning_
</td> </tr>
<tr>
<td>
</td>
<td>
5\.
</td>
<td>
Outline the approach for clear versioning
_Versioning will follow the articles published in recognized and specialized
journals_
</td> </tr>
<tr>
<td>
</td>
<td>
6\.
</td>
<td>
Specify standards for metadata creation (if any). If there are no standards in
your discipline describe what type of metadata will be created and how
_By specifying at least source, time period, measurement method, type of data,
unit of measurement, access rights, date downloaded. All data will be checked
for consistency, and any changes made to input data will be documented._
</td> </tr>
<tr>
<td>
5.2 Making data openly accessible
</td>
<td>
1\.
</td>
<td>
Specify which data will be made openly available? If some data is kept closed
provide rationale for doing so
_The data obtained are expected to be accessible to others who may use them
for further experimentation, and it is intended to make these publicly
available._
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Specify how the data will be made available
_By publishing results and methodology in scientific journals and in the UT
catalogue._
</td> </tr>
<tr>
<td>
</td>
<td>
3\.
</td>
<td>
Specify what methods or software tools are needed to access the data? Is
documentation about the software needed to access the data included? Is it
possible to include the relevant software (e.g. in open source code)?
_TBD_
</td> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
Specify where the data and associated metadata, documentation and code are
deposited
_The data will be stored on the local hard disk of my laptop._
</td> </tr>
<tr>
<td>
</td>
<td>
5\.
</td>
<td>
Specify how access will be provided in case there are any restrictions
_TBD_
</td> </tr>
<tr>
<td>
5.3. Making data interoperable
</td>
<td>
1\.
</td>
<td>
Assess the interoperability of your data. Specify what data and metadata
vocabularies, standards or methodologies you will follow to facilitate
interoperability.
_A methodology will be delivered to ease access to the data obtained._
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow inter-disciplinary interoperability? If
not, will you provide mapping to more commonly used ontologies?
_Standard vocabulary will be mainly used. If necessary a mapping of the terms
will be delivered._
</td> </tr>
<tr>
<td>
5.4. Increase data reuse (through clarifying licences)
</td>
<td>
1\.
</td>
<td>
Specify how the data will be licenced to permit the widest reuse possible
_TBD_
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Specify when the data will be made available for re-use. If applicable,
specify why and for what period a data embargo is needed _TBD_
</td> </tr>
<tr>
<td>
</td>
<td>
3\.
</td>
<td>
Specify whether the data produced and/or used in the project is useable by
third parties, in particular after the end of the project? If the re-use of
some data is restricted, explain why
_TBD_
</td> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
Describe data quality assurance processes
_TBD_
</td> </tr>
<tr>
<td>
</td>
<td>
5\. Specify the length of time for which the data will remain re-usable _TBD_
</td> </tr>
<tr>
<td>
6\. Other
</td>
<td>
Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
_Not applicable_
</td> </tr>
<tr>
<td>
</td>
<td>
</td> </tr> </table>
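Point 5.1.1 of the table above lists the fields of the per-dataset metadata
document (source, time period, measurement method, type of data, unit of
measurement, access rights, date downloaded). The sketch below shows one
possible machine-readable form of such a record; the field names follow the
text, while every value is a hypothetical placeholder.

```python
# Sketch of a per-dataset metadata record following point 5.1.1 above.
# Field names come from the text; all values are hypothetical placeholders.

import json

metadata = {
    "source": "<data source>",                     # e.g. rig or prior project
    "time_period": "2019-03-01/2019-03-15",        # ISO 8601 interval (example)
    "measurement_method": "<measurement method>",
    "type_of_data": "<type of data>",
    "unit_of_measurement": "<unit>",
    "access_rights": "restricted (UT rules)",
    "date_downloaded": "2019-04-02",
    "changes_to_input_data": [],                   # documented per the answer above
}

# One such JSON document would accompany each input dataset.
print(json.dumps(metadata, indent=2))
```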
## ESR 13: Michael McCartney (GEDE)
<table>
<tr>
<th>
1\. Data summary
</th>
<th>
1\.
</th>
<th>
Data will be generated to describe the ability of ML algorithms to extrapolate
data sets and predict the onset of combustion instabilities (an illustrative
sketch follows this table).
</th> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Data generated will be stored in .csv format
</td> </tr>
<tr>
<td>
</td>
<td>
3\.
</td>
<td>
Existing data from GE combustor tests will be used
</td> </tr>
<tr>
<td>
</td>
<td>
4\.
</td>
<td>
Size of data generated is not currently known
</td> </tr>
<tr>
<td>
</td>
<td>
5\.
</td>
<td>
Conclusions and summary statistics of data generated will be useful to GEDE
and the combustion community. The data itself will only be of use to GEDE.
</td> </tr>
<tr>
<td>
2\. Allocation of resources
</td>
<td>
1\.
</td>
<td>
Data generated will be confidential to GEDE and so costs associated with long
term preservation will be carried by them.
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Data management will be carried out by ESR13 in line with GEDE data management
practices.
</td> </tr>
<tr>
<td>
3\. Data security
</td>
<td>
1\.
</td>
<td>
Data storage and transfer will be managed in line with GEDE data management
practices and will make use of the GE Box cloud solution and GE Github, which
provide secure storage and recovery.
</td> </tr>
<tr>
<td>
4\. Ethical aspects
</td>
<td>
1\.
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
5. FAIR Data
5.1. Making data findable, including provisions for metadata
</td>
<td>
1\.
</td>
<td>
Metadata and DOIs will be generated in line with general guidelines in the
DMP.
</td> </tr>
<tr>
<td>
5.2 Making data openly accessible
</td>
<td>
1\.
</td>
<td>
Conclusions and summary statistics of data generated will be made openly
available. The data itself will be confidential to GEDE and so will not be
shared.
</td> </tr>
<tr>
<td>
</td>
<td>
2\.
</td>
<td>
Relevant data will be made available through journal articles and in the
trusted repository defined in the DMP.
</td> </tr>
<tr>
<td>
5.3. Making data interoperable
</td>
<td>
1\.
</td>
<td>
Metadata and DOIs will be generated in line with general guidelines in the DMP
</td> </tr>
<tr>
<td>
5.4. Increase data reuse (through clarifying licences)
</td>
<td>
1\.
</td>
<td>
Conclusions and summary statistics of data generated will be made openly
available. The data itself will be confidential to GEDE and so will not be
shared.
</td> </tr>
<tr>
<td>
6\. Other
</td>
<td>
Internal GE processes on data management, IP protection and confidentiality
still apply.
</td> </tr> </table>
# Executive summary
This is deliverable D9.3, the Data Management Plan of the SCORES project. It
constitutes a public document, delivered in the context of WP9 Dissemination
and exploitation of results, Task 9.1 Dissemination and Communication. The
objective of Task 9.1 is to ensure the transferability of relevant Project
information.
This document presents the first release of Data Management Plan in the
framework of the SCORES project. The main purpose of this Deliverable is to
provide the plan for managing the data generated and collected during the
Project. Specifically, the Data Management Plan describes the data management
life cycle for all datasets to be collected, processed and/or generated by the
project. It covers:
* Identification of the results that should be subject of the SCORES dissemination and exploitation
* Analysis of the main data uses and users
* Exploration of the restrictions related to Intellectual Property Rights in accordance with the Consortium Agreement
* Definition of the data assurance processes that are to be applied during and after the completion of the Project
In addition, the Data Management Plan specifies whether data will be
shared/made open and how and what methodology and standards will be applied.
This document is prepared in compliance with the template provided by the
Commission in the Annex 1 of the Guidelines on Data Management in Horizon
2020.
# Introduction
This document constitutes the first issue of the Data Management Plan (DMP) in
the EU framework of the SCORES project under Grant Agreement No. 766464. The
objective of the DMP is to establish the measures for promoting the findings
during the Project’s life and detail what data the Project will generate,
whether and how it will be exploited or made accessible for verification and
re-use, and how it will be curated and preserved. The DMP enhances and ensures
the transferability of relevant Project information and takes into account the
restrictions established by the Consortium Agreement. In this framework, the
DMP sets the basis for both Dissemination Plan and Exploitation Plan. The
first version of the DMP is delivered at month 6, later the DMP will be
monitored and updated in parallel with the different versions of Dissemination
and Exploitation Plans. It is acknowledged that not all data types will be
available at the start of the Project, thus whenever important, if any changes
occur to the SCORES project due to inclusion of new data sets, changes in
consortium policies or external factors, the DMP will be updated in order to
reflect the actual data generated and the user requirements as identified by
the SCORES consortium participants.
The SCORES project aims to combine and optimize the multi-energy generation,
storage and consumption of local renewable energy and grid supply, bringing
new sources of flexibility to the grid, and enabling reliable operation with a
positive business case in Europe’s building stock. SCORES optimizes self-
consumption of renewable energy and defers investments in the energy grid.
The overall goal of the SCORES project is to demonstrate in the field the
integration, optimization and operation of a building energy system including
new compact hybrid storage technologies, that optimizes supply, storage and
demand of electricity and heat in residential buildings and that increases
self-consumption of local renewable energy in residential buildings at the
lowest cost.
SCORES project comprises six technical work packages as follows:
* WP3 Enhancement of energy conversion technology
* WP4 Development of electrical storage system using second-life Li-ion battery
* WP5 Optimization of heat storage technology based on Chemical Looping Combustion (CLC)
* WP6 Energy management system and (electrical) system integration
* WP7 Demonstration of the integrated energy system including the innovative technologies in an existing multifamily building connected to a district heating grid
* WP8 Demonstration of the integrated energy system including the innovative technologies in an existing multifamily building with electric space heating
Three non-technical work packages ensure the facilitation of the technical
work and coordination of all the work packages, dissemination and
communication of the project results. These work packages consist of the
following:
* WP1 Project Management
* WP2 Modelling and evaluation of the system added value and business opportunities
* WP9 Dissemination and exploitation of results
This document has been prepared to describe the data management life cycle for
all data sets that will be collected, processed or generated by the SCORES
project. It is a document outlining how research data will be handled during
the Project, and after the Project is completed. It describes what data will
be collected, processed or generated and what methodologies and standards are
to be applied. It also defines if and how this data will be shared and/or made
open, and how it will be curated and preserved.
# Open Access
Open access can be defined as the practice of providing online access to
scientific information that is free of charge to the reader and that is
reusable. In the context of R&D, open access typically focuses on access to
“scientific information”, which refers to two main categories:
* Peer-reviewed scientific research articles (published in academic journals), or
* Scientific research data (data underlying publications and/or raw data).
It is important to note that:
* Open access publications go through the same peer review process as non-open access publications.
* As an open access requirement comes after a decision to publish, it is not an obligation to publish; it is up to researchers whether they want to publish some results or not.
* As the decision on whether to commercially exploit results (e.g. through patents or otherwise) is made before the decision to publish (open access or not), open access does not interfere with the commercial exploitation of research results. 1
Benefits of open access:
* Unprecedented possibilities for the dissemination and exchange of information due to the advent of the internet and electronic publishing.
* Wider access to scientific publications and data including creation and dissemination of knowledge, acceleration of innovation, foster collaboration and reduction of the effort duplication, involvement of citizens and society, contribution to returns on investment in R&D etc.
**Figure 1: Benefits of open access** – possibilities to access and share
scientific information, faster growth, foster collaboration, involvement of
citizens and society, building on previous research results, accelerated
innovation, increased efficiency, improved quality of results, improved
transparency.
The EC capitalizes on open access and open science as it lowers barriers to
accessing publicly-funded research. This increases research impact and the
free flow of ideas, and facilitates a knowledge-driven society, at the same
time underpinning the EU Digital Agenda (OpenAIRE Guide for Research
Administrators - EC funded projects). The European Commission’s open access
policy is not a goal in itself, but an element in the promotion of affordable
and easily accessible scientific information, for the scientific community
itself but also for innovative small businesses.
## _**Open Access to peer-reviewed scientific publications** _
Open access to scientific peer-reviewed publications (also known as Open
Access Mandate) has been anchored as an underlying principle in the Horizon
2020 Regulation and the Rules of Participation and is consequently implemented
through the relevant provisions in the Grant Agreement. Non-compliance can
lead, amongst other measures, to a grant reduction.
More specifically, Article 29 of the SCORES GA: “Dissemination of results,
Open Access, Visibility of EU Funding” establishes the obligation to ensure
open access to all peer-reviewed articles relating to the SCORES project.
### _Article 29.2 SCORES GA: Open access to scientific publications_
“Each beneficiary must ensure open access (free of charge online access for
any user) to all peer reviewed scientific publications relating to its
results.
In particular, it must:
1. as soon as possible and at the latest on publication, deposit a machine-readable electronic copy of the published version or final peer-reviewed manuscript accepted for publication in a repository for scientific publications;
Moreover, the beneficiary must aim to deposit at the same time the research
data needed to validate the results presented in the deposited scientific
publications.
2. ensure open access to the deposited publication — via the repository — at the latest:
1. on publication, if an electronic version is available for free via the publisher, or
2. within six months of publication (twelve months for publications in the social sciences and humanities) in any other case.
3. ensure open access — via the repository — to the bibliographic metadata that identify the deposited publication.
The bibliographic metadata must be in a standard format and must include all
of the following:
* the terms “European Union (EU)” and “Horizon 2020”;
* the name of the action, acronym and grant number;
* the publication date, and length of embargo period if applicable; \- a persistent identifier.”
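As an illustration of the bibliographic metadata required above, a minimal
machine-readable record might look as follows; the required fields follow the
list in Article 29.2, while the publication-specific values are placeholders
rather than real SCORES outputs.

```python
# Sketch of a bibliographic metadata record per Article 29.2 of the GA.
# The field list follows the article; all values are placeholders.

bibliographic_metadata = {
    "funder_terms": ["European Union (EU)", "Horizon 2020"],
    "action_name": "<full action name>",   # as registered for the grant
    "acronym": "SCORES",
    "grant_number": "766464",
    "publication_date": "2019-06-01",      # placeholder date
    "embargo_period_months": 6,            # if applicable
    "persistent_identifier": "doi:10.XXXX/placeholder",
}
```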
### Green Open Access
The green open access is also called self-archiving and means that the
published article or the final peer-reviewed manuscript is archived by the
researcher in an online repository before, after or alongside its publication.
Access to this article is often delayed (embargo period). Publishers recoup
their investment by selling subscriptions and charging pay-per-download/view
fees during this exclusivity period. This model is promoted alongside the
“Gold” route by the open access community of researchers and librarians, and
is often preferred.
### Gold Open Access
The gold open access is also called open access publishing, or author-pays
publishing, and means that a publication is immediately provided in open
access mode by the scientific publisher. Associated costs are shifted from
readers to the university or research institute to which the researcher is
affiliated, or to the funding agency supporting the research. This model is
usually the one promoted by the community of well-established scientific
publishers in the business.
## _**Open Access to research data** _
“Research data” refers to information, in particular facts or numbers,
collected to be examined and considered and as a basis for reasoning,
discussion, or calculation. In a research context, examples of data include
statistics, results of experiments, measurements, observations resulting from
fieldwork, survey results, interview recordings and images. The focus is on
research data that is available in digital form.
### _Article 29.3 SCORES GA: Open access to research data_
“Regarding the digital research data generated in the action (‘data’), the
beneficiaries must:
1. deposit in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge for any user — the following:
1. the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible;
2. other data, including associated metadata, as specified and within the deadlines laid down in the 'data management plan' (see Annex 1 of the
SCORES GA);
2. provide information — via the repository — about tools and instruments at the disposal of the beneficiaries and necessary for validating the results (and — where possible — provide the tools and instruments themselves).
This does not change the obligation to protect results in Article 27, the
confidentiality obligations in Article 36, the security obligations in Article
37 or the obligations to protect personal data in Article 39, all of which
still apply.
The beneficiaries do not have to ensure open access to specific parts of their
research data if the achievement of the action's main objective, as described
in Annex 1, would be jeopardized by making those specific parts of the
research data openly accessible. In this case, the data management plan must
contain the reasons for not giving access.”
## _**Dissemination & Communication and Open Access ** _
For the implementation of the SCORES project, a complete set of dissemination
and communication activities is scheduled, with the objectives of raising
awareness in the research community, industry and the wider public
(e-newsletters, e-brochures, posters and events are foreseen for the
dissemination of SCORES to key groups potentially related to the exploitation
of project results).
or videos, for instance, will be developed for a communication to a wider
audience. Details about all those dissemination and communication elements are
provided in the deliverable D9.2 Communication and Dissemination Plan. The
Data Management Plan and the actions derived are part of the overall SCORES
dissemination and communication strategy, which is included in the above
mentioned D9.2.
# Objectives of Data Management Plan
The purpose of the SCORES Data Management Plan is to provide a management
assurance framework and processes fulfilling the data management policy to be
used by the SCORES project partners for all the dataset types generated by the
SCORES project. The aim of the DMP is to control and ensure the quality of
project activities, and to manage the material/data generated within the
SCORES project effectively and efficiently. It also describes how data will be
collected, processed, stored and managed holistically from the perspective of
external accessibility and long-term archiving.
The content of the DMP is complementary to other official documents that
define obligations under the Grant Agreement and associated annexes, and shall
be considered a living document and as such will be the subject of periodic
updating as necessary throughout the lifespan of the Project.
**Figure 2: SCORES Data Management Plan overview** – the DMP in relation to
the Communication and Dissemination Plan and the Exploitation Plan, covering
publication and open access to scientific publications, the repository of
research data and open access to research data, the IPR manual and IPR
strategy, and business models and the Business Plan.
# SCORES Project Website and Sharepoint - storage and access
## _**SCORES Project Website** _
The SCORES project website is used for storing only public documents related
to the Project and dissemination. The website has been set up under the
address www.scores-project.eu and was launched in January 2018. The SCORES
website is meant to remain operational for the whole Project duration and a
minimum of 2 years after the Project end. The website represents the first
step towards the partial objective of developing and deploying an awareness
and dissemination plan.
The design of the website has been done by the dissemination leader FENIX,
which is also in charge of website maintenance and regular updates. As the
Project website is not intended to be static, the news and events as well as
the rest of the content will be updated once a month and managed throughout
the duration of the Project based on the partners’ inputs and Project
evolution. Due to the expected impact on different audiences all around the
world, it was designed to provide complete and technical information in a way
that is accessible to a wide range of stakeholders. The website is available
in English, but translation to the partners’ languages is considered as well
in order to break the language barrier and enable wide and effective
communication of Project results at national level.
The site itself has only the public section, which is accessible to everyone
and contains public deliverables, promo materials, presentations, newsletters,
publications, papers and others.
To ensure the safety of the data, the partners will use their available local
file servers to periodically create backups of the relevant materials. The
SCORES project website itself already has its own backup procedures.
The Project Coordinator (TNO) of the SCORES along with the Dissemination and
Exploitation Manager (FENIX) will be in charge for data management and all the
relevant issues.
**Figure 3: SCORES project website**
## _**SCORES Project Sharepoint** _
The SCORES Sharepoint site is the baseline for document sharing within the
framework of the SCORES project. Sharepoint is used for document sharing,
document configuration control, action item handling and the contact
information list. The Project Coordinator is the administrator of the SCORES
Sharepoint site. The administrator (TNO) is responsible for Sharepoint
maintenance and for adding or removing users. Users’ permissions are also
handled by the administrator (TNO). In order to ensure the confidentiality of
documents in accordance with the Grant and Consortium Agreement, a Project
Security Instruction has been established.
The Sharepoint site contains picture library and several document libraries.
Document libraries consist of:
* Management, Contracts, Finances
* Actions, Documents, User Instructions, Contacts
* Templates and Dissemination material
* Dropbox
* Archive
* Papers
* Design Folder
Documents with a formal status (e.g. deliverables, important plans,
procedures) have to be reviewed and approved by an expert colleague and the
General Assembly. The final status of these documents can become “Authorized”,
and documents are formalized by signature on the cover page. After the
authorization of a document by the General Assembly, the document is moved to
the Archive by the Sharepoint administrator (TNO).
Grant Agreement and Consortium Agreement set out rules for data handling and
management. Distribution within a file share environment shall be limited to
active participants from the consortium partners on a need-to-know basis.
Confidential information shared through the SCORES Sharepoint site may not be
distributed outside the consortium. All Project partners shall take
precautions to securely store data connected to the Project when downloading
and locally storing files from the Sharepoint site. For this purpose, all TNO
laptops are storage-encrypted with Bitlocker, and similar measures shall be
taken by the other Project partners.
# Data management plan implementation
The organisational structure of the SCORES project matches the complexity of
the Project and is in accordance with the recommended management structure of
the DESCA model Consortium Agreement. The organisational structure of the
Project is shown in the figure below.
**Figure 4: Organisational structure of the SCORES project**
The general and technical management of the Project is handled by the
**Coordinator** of the Project (TNO). Experts within the TNO organisation
provide the Project Manager (PM) at TNO with administrative, financial and
legal support. TNO has a vast experience in the administration and management
of national and international collaborative projects.
Responsibilities of the Coordinator include:
* The overall management of the Project, including administrative tasks and all contacts with the EC and the Project Officer
* Coordinating all technical activities, including progress reporting,
* Organising and chairing the meetings of the General Assembly and Executive Board managing bodies
* Assisting in coordination of the dissemination and exploitation activities
* Representing the Project in public exposure and media contacts
The R&D work in the Project is divided in six technical work packages and
three nontechnical work packages. Each work package is managed by **Work
Package Leader** (WP Leader). WP Leaders are responsible for managing their
work package as a self-contained entity.
Tasks and responsibilities of the WP Leaders include, among others, following:
* Coordination of the technical work in the WPs, including contribution to reporting
* Assessment of the WP progress to ensure the output performance, costs and timeline are met
* Identification of IPR issues and opportunities
* Organisation of the WP meetings
* Contribution to the dissemination activities
* Initiation of all actions necessary for reaching a solution or decision in consultation with the researchers involved and the PMs
In the case of technical problems at WP level, the WP Leader should be
notified as soon as possible.
In addition, each WP is further subdivided into its large components tasks,
which are allocated to a **Task Leader** responsible for their coordination.
In the organisation structure, two management bodies are identified:
* **General Assembly (GA):** consists of one representative of each partner, chaired by the representative of the Coordinator. The task of the GA is to supervise the Project and to take decisions in major issues that may affect the wide implementation and strategy of the entire Project like changes of work plan, change of Project Manager or WP Leader, budget relocations, IPR issues, entrance/leave of partners and other non-technical matters of general importance.
* **Executive Board (EB):** consists of all WP Leaders, chaired by the representative of the Coordinator. The EB monitors the technical progress, approves progress reports and deliverables, assesses milestones, deals with technical problems that concern two or more WPs, prepares issues that should be decided by the General Assembly and coordinates meetings and conference visits.
The GA is supported by the **Expert Advisory Board (EAB)** consisting of the
number of external experts that will be selected on the basis of their
profound and long-lasting expertise in the field of research, innovation and
industrialisation.
Partners of the SCORES project demonstrate relevant management capabilities
necessary to support and provide major contribution to all the activities
envisaged in the Project work. All partners and their roles in the SCORES
project are listed in the following table.
## **Table 1: SCORES partners and their role in the project**
<table>
<tr>
<th>
**#**
</th>
<th>
**Partner short name**
</th>
<th>
**Partner legal name**
</th>
<th>
**Partner role in the SCORES project**
</th> </tr>
<tr>
<td>
**1.**
</td>
<td>
**TNO**
</td>
<td>
Nederlandse organisatie voor
Toegepast
Natuurwetenschappelijk onderzoek
</td>
<td>
Project Coordinator, contributing to optimization and further development of
the CLC thermal energy storage technology, definition of controls and
algorithms of the hybrid system and system engineering of the demo cases.
</td> </tr>
<tr>
<td>
**2.**
</td>
<td>
**AEE**
</td>
<td>
AEE – Institute for sustainable technologies
(AEE INTEC)
</td>
<td>
Leader of demonstration of the SCORES system in Northern Europe, contributing
to the definition, design, enhancement and validation of the heat battery
subsystem and its components, the definition, design and implementation of the
overall SCORES system and performing the system simulations of the SCORES
system.
</td> </tr>
<tr>
<td>
**3.**
</td>
<td>
**EDF**
</td>
<td>
Electricité de France SA
</td>
<td>
Leader of demonstration of the integrated energy system including the
innovative technologies in an existing multifamily building with electric
space heating situated in Southern Europe. Responsible for the technical,
economic and environmental evaluation of the system. Involvement in battery
testing, BEMS development and self-consumption of PV in buildings.
</td> </tr>
<tr>
<td>
**4.**
</td>
<td>
**RINA-C**
</td>
<td>
RINA Consulting S.p.A.
</td>
<td>
Leader of Modelling and evaluation of the system added value and business
opportunities, being specifically responsible for the preparation of the
business models, technology roadmap for the future upscaling of the system as
well as for the definition of the standardization measure.
</td> </tr>
<tr>
<td>
**5.**
</td>
<td>
**FENIX**
</td>
<td>
FENIX TNT s.r.o.
</td>
<td>
Dissemination and exploitation leader, development of business modelling and
business plans, IPR management, market assessment, data management.
</td> </tr>
<tr>
<td>
**6.**
</td>
<td>
**KMG**
</td>
<td>
König Metall GmbH & Co. KG
</td>
<td>
Responsible for the building of the CLC storage subsystem, working on its
technical and economical manufacturing.
</td> </tr>
<tr>
<td>
**7.**
</td>
<td>
**IPS**
</td>
<td>
Instituto Politécnico de Setúbal
</td>
<td>
Responsible for the enhancement of energy conversion technology with the focus
on benchmarking different existing PCMs and selection of the appropriate one,
CFD simulation to optimize the PCM integration with the other system
components and optimisation of the DHW subsystem with
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
PV/T collectors and coupling with the heat battery.
</td> </tr>
<tr>
<td>
**8.**
</td>
<td>
**FOP**
</td>
<td>
Forsee Power
</td>
<td>
Responsible for selection of the EV battery source and design and building of
2 battery cabinets with a new battery controller for the 2 demonstrators
</td> </tr>
<tr>
<td>
**9.**
</td>
<td>
**HEL**
</td>
<td>
Heliopac SAS
</td>
<td>
Leader of the installation, commissioning and decommissioning of the northern
and southern Europe demo sites. Responsible for the enhancement of the
coupling of long term storage with domestic hot water produced by the
combination of water to water heat pumps and PV/T collectors for the southern
demo site.
</td> </tr>
<tr>
<td>
**10.**
</td>
<td>
**CAM**
</td>
<td>
Campa
</td>
<td>
Responsible for the development of air heat pumps with PCM storage energy
system for space heating and Electro-Thermal storage units for ambient air
comfort development.
</td> </tr>
<tr>
<td>
**11.**
</td>
<td>
**SIE**
</td>
<td>
Siemens Nederland N.V.
</td>
<td>
Leader of the design, engineering and installation of an integrated Building
Energy Management System that will optimize the self-consumption,
self-generation and the flexibility of the building by monitoring and
controlling the various developed energy-related technologies and by
optimizing the balance between supply and demand of electricity and heat.
Involvement in the engineering and production of the convertor cabinets.
</td> </tr>
<tr>
<td>
**12.**
</td>
<td>
**SAL**
</td>
<td>
Salzburg AG
</td>
<td>
Contributing to setting up the system requirements for the demonstration case
by giving input with respect to the demo building and the district heating
grid connection, performing the system integration, installation and
commissioning, troubleshooting during integration and installation, and
conducting the decommissioning process for the demo system.
</td> </tr> </table>
# Research data
“Research data” refers to information, in particular facts or numbers,
collected to be examined and considered as a basis for reasoning, discussion,
or calculation. In a research context, examples of data include statistics,
results of experiments, measurements, observations resulting from fieldwork,
survey results, interview recordings and images. The focus is on research data
that is available in digital form.
As indicated in the Guidelines on Data Management in Horizon 2020 (European
Commission, Research & Innovation, October 2015), scientific research data
should be easily:
## • DISCOVERABLE
The data and associated software produced and/or used in the project should be
discoverable (and readily located), identifiable by means of a standard
identification mechanism (e.g. Digital Object Identifier).
## • ACCESSIBLE
Information about the modalities, scope, licenses (e.g. licencing framework
for research and education, embargo periods, commercial exploitation, etc.) in
which the data and associated software produced and/or used in the project is
accessible should be provided.
## • ASSESSABLE and INTELLIGIBLE
The data and associated software produced and/or used in the project should be
easily assessable by and intelligible to third parties in contexts such as
scientific scrutiny and peer review (e.g. the minimal datasets are handled
together with scientific papers for the purpose of peer review, data is
provided in a way that judgments can be made about their reliability and the
competence of those who created them).
## • USEABLE beyond the original purpose for which it was collected
The data and associated software produced and/or used in the project should be
useable by third parties even long time after the collection of the data (e.g.
the data is safely stored in certified repositories for long term preservation
and curation; it is stored together with the minimum software, metadata and
documentation to make it useful; the data is useful for the wider public needs
and usable for the likely purposes of non-specialists).
## • INTEROPERABLE to specific quality standards
The data and associated software(s) produced and/or used in the project should
be interoperable allowing data exchange between researchers, institutions,
organisations, countries, etc.
Some examples of research data include:
* Documents (text, Word), spreadsheets
* Questionnaires, transcripts, codebooks
* Laboratory notebooks, field notebooks, diaries
* Audiotapes, videotapes
* Photographs, films
* Test responses, slides, artefacts, specimens, samples
* Collection of digital objects acquired and generated during the process of research
* Database contents (video, audio, text, images)
* Models, algorithms, scripts
* Contents of an application (input, output, logfiles for analysis software, simulation software, schemas)
* Methodologies and workflows
* Standard operating procedures and protocols.
In addition to the other records to manage, some kinds of data may not be
sharable due to the nature of the records themselves, or to ethical and
privacy concerns (e.g. preliminary analyses, drafts of scientific papers,
plans for future research, peer reviews, communication with partners, etc.).
Research data also do not include trade secrets, commercial information,
materials necessary to be held confidential by researcher until they are
published, or information that could invade personal privacy. Research records
that may also be important to manage during and beyond the project are:
correspondence, project files, technical reports, research reports, etc.
# Data sets of the SCORES project
Projects under Horizon 2020 are required to deposit the research data \- the
data, including associated metadata, needed to validate the results presented
in scientific publications as soon as possible; and other data, including
associated metadata, as specified and within the deadlines laid down in a data
management plan.
At the same time, projects should provide information (via the chosen
repository) about tools and instruments at the disposal of the beneficiaries
and necessary for validating the results, for instance specialised software(s)
or software code(s), algorithms, analysis protocols, etc. Where possible, they
should provide the tools and instruments themselves.
The types of data to be included within the scope of the SCORES Data
Management Plan shall, as a minimum, cover the types of data considered
complementary to material already contained within declared Project
Deliverables. In order to collect the information generated during the
Project, the template for data collection will be circulated every 12 months.
The scope of this template is to detail the research results that will be
developed during the SCORES project, describing the kind of results and how
they will be managed. The responsibility to define and describe all
non-generic data sets specific to an individual work package lies with the WP
leader.
## Data Set Reference and Name
Identifier for the data set to be produced. All data sets within this DMP have
been given a unique field identifier and are listed in Table 4 (List of the
SCORES project data sets and sharing strategy).
## Data Set Description
A data set is defined as a structured collection of data in a declared format.
Most commonly a data set corresponds to the contents of a single database
table, or a single statistical data matrix, where every column of the table
represents a particular variable, and each row corresponds to a given member
of the data set in question. The data set may comprise data for one or more
fields. For the purposes of this DMP, data sets have been defined by generic
data types that are considered applicable to the SCORES project. For each data
set, the characteristics of the data set have been captured in tabular format,
as enclosed in Table 4 (List of the SCORES project data sets and sharing
strategy).
## Standards & Metadata
Metadata is defined as “data about data”. It refers to structured information
that describes, explains, locates, or otherwise makes it easier to retrieve,
use or manage an information resource.
Metadata can be categorised in three types:
* Descriptive metadata describes an information resource for identification and retrieval through elements such as title, author, and abstract.
* Structural metadata documents relationships within and among objects through elements such as links to other components (e.g., how pages are put together to form chapters).
* Administrative metadata manages information resources through elements such as version number, archiving date, and other technical information for the purposes of file management, rights management and preservation.
There are a large number of metadata standards which address the needs of
particular user communities.
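To make these three categories concrete, the following minimal Python sketch shows how descriptive, structural and administrative elements might be recorded for one of the SCORES datasets; the field names follow common Dublin Core-style conventions and are illustrative, not a mandated schema.

```python
# A minimal sketch of the three metadata types for a hypothetical SCORES
# dataset, expressed as a plain Python dictionary. Field names are
# illustrative conventions, not a prescribed standard.
dataset_metadata = {
    "descriptive": {  # identification and retrieval
        "title": "Storage charge and discharge cycles",
        "creator": "SCORES consortium partner",
        "abstract": "Temperature and power logs of SETS heat storage tests.",
    },
    "structural": {  # relationships within and among objects
        "part_of": "WP3 laboratory test campaign",
        "related_files": ["test_plan.pdf", "sensor_layout.png"],
    },
    "administrative": {  # file, rights and preservation management
        "version": "1.0",
        "archived": "2018-04-30",
        "dissemination_level": "CO",  # confidential, consortium only
        "format": "text/csv",
    },
}

print(dataset_metadata["descriptive"]["title"])
```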
## Data Sharing
While the Project is live, the sharing of data shall be governed by the
configuration rules defined in the access profiles for the project
participants. Each individual project data set item shall be allocated a
“dissemination classification” (i.e. public or confidential) for the purposes
of defining the data sharing restrictions. The classification shall be an
expansion of the system of confidentiality applied to deliverable reports
provided under the SCORES Grant Agreement.
The above levels are linked to the “Dissemination Level” specified for all
SCORES deliverables as follows:
* PU Public
* CO Confidential, only for members of the consortium (including the Commission Services)
* EU-RES Classified Information: RESTREINT UE (Commission Decision 2005/444/EC)
* EU-CON Classified Information: CONFIDENTIEL UE (Commission Decision 2005/444/EC)
* EU-SEC Classified Information: SECRET UE (Commission Decision 2005/444/EC)
All material designated with a PU dissemination level is deemed uncontrolled.
In case a dataset cannot be shared, the reasons for this should be stated
(e.g. ethical, personal data protection, intellectual property, commercial,
privacy-related, or security-related reasons).
Data will be shared once the related deliverable or paper has been made
available in an open access repository. The expectation is that data related
to a publication will be openly shared. However, to allow the exploitation of
any opportunities arising from the raw data and tools, data sharing will
proceed only if all co-authors of the related publication agree. The Lead
Author is responsible for getting approvals and then, with FENIX assistance,
sharing the data and metadata on Zenodo (www.zenodo.org), a popular repository
for research data. The Lead Author will also create an entry on OpenAIRE
(www.openaire.eu) in order to link the publication to the data. A link to the
OpenAIRE entry will then be submitted to the SCORES Website Administrator
(FENIX) by the Lead Author.
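As a concrete illustration of this workflow, the sketch below uses Zenodo's public REST deposit API from Python to create a deposition and attach a data file; the access token and file name are placeholders, and the exact API behaviour should be verified against the current Zenodo documentation.

```python
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
ACCESS_TOKEN = "your-zenodo-token"  # placeholder: a personal access token

# 1. Create an empty deposition.
r = requests.post(ZENODO_API, params={"access_token": ACCESS_TOKEN}, json={})
r.raise_for_status()
deposition = r.json()

# 2. Attach the data file agreed by all co-authors (placeholder file name).
with open("measurement_data.csv", "rb") as fp:
    requests.post(
        f"{ZENODO_API}/{deposition['id']}/files",
        params={"access_token": ACCESS_TOKEN},
        data={"name": "measurement_data.csv"},
        files={"file": fp},
    ).raise_for_status()
```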
OpenAIRE is an EC-funded initiative designed to promote the open access
policies of the EC and to help researchers, research officers and project
coordinators comply with them. OpenAIRE implements the Horizon 2020 Open
Access Mandate for publications and its Open Research Data Pilot and may be
used to reference both the publication and the data. Each EC project has its
own page on OpenAIRE, featuring project information, related project
publications and data sets, and a statistics section.
In case of any questions regarding the Open Access policy of the EC, the
representatives of the National Open Access Desk for OpenAIRE in the
Netherlands should be contacted, i.e. Just de Leeuwe (TU Delft; email:
[email protected]) and Elly Dijk (Data Archiving and Networked Services –
DANS; email: [email protected]).
**Figure 5. OpenAIRE website**
## Data archiving and preservation
Both Zenodo and OpenAIRE are purpose-built services that aim to provide
archiving and preservation of long-tail research data. In addition, the SCORES
website, linking back to OpenAIRE, is expected to be available for at least 2
years after the end of the Project. At the formal Project closure all the data
material that has been collated or generated within the Project and classified
for archiving shall be copied and transferred to a digital archive (Project
Coordinator responsibility).
The document structure and type definition will be preserved as defined in the
document breakdown structure and work package groupings specified. At the time
of document creation, the document will be designated as a candidate data item
for future archiving. This process is performed by the use of codification
within the file naming convention (see Section 15). The process of archiving
will be based on a data extract performed within 12 weeks of the formal
closure of the SCORES project.
The archiving process shall create unique file identifiers by the
concatenation of “metadata” parameters for each data type. The metadata index
structure shall be formatted in the metadata order. This index file shall be
used as an inventory record of the extracted files, and shall be validated by
the associated WP leader.
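The following minimal Python sketch illustrates one possible implementation of such an identifier and inventory index; the specific metadata fields, their order, and the file names are assumptions for illustration only.

```python
import csv

def archive_identifier(company, doc_type, doc_no, issue_no):
    """Concatenate metadata parameters into a unique archive identifier.

    The parameter order mirrors the metadata index structure; the exact
    fields are an assumption for illustration.
    """
    return f"{company}-SCORES-{doc_type}-{doc_no:03d}-{issue_no}"

def write_inventory(records, index_file="archive_index.csv"):
    # The index file serves as the inventory record of the extracted files,
    # to be validated by the associated WP leader.
    with open(index_file, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["identifier", "filename"])
        for rec in records:
            writer.writerow([archive_identifier(*rec[:-1]), rec[-1]])

write_inventory([("TNO", "RP", 1, 2, "meeting_plan.pdf")])
```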
**Figure 6. ZENODO repository**
# Technical requirements of data sets
The applicable data sets are restricted to the following data types for the
purposes of archiving. The technical characteristics of each data set are
described in the following sections. The copyright with respect to all data
types shall be subject to the IPR clauses in the Grant Agreement, but shall be
considered royalty free. The use of file compression utilities, such as
“WinZip”, is prohibited. No data files shall be encrypted.
## _**Engineering CAD drawings** _
The .dwg file format is one of the most commonly used design data formats,
found in nearly every design environment, and signifies compatibility with
AutoCAD technology. Autodesk created .dwg in 1982 with the launch of the first
version of its AutoCAD software. A .dwg file contains all the pieces of
information a user enters, such as designs, geometric data, maps and photos.
## _**Static graphical images** _
Graphical images shall be defined as any digital image irrespective of the
capture source or subject matter. Images should be composed so as to contain
only objects that are directly related to SCORES activity and do not breach
the IPR of any third parties.
Image files are composed of digital data and come in two primary formats,
“raster” and “vector”. Data must be represented in the rasterised state for
use on a computer display or for printing. Once rasterised, an image becomes a
grid of pixels, each of which has a number of bits designating its colour,
equal to the colour depth of the device displaying it. The SCORES project
shall only use raster-based image files. The allowable static image file
formats are JPEG and PNG.
There is normally a direct positive correlation between image file size and
both the number of pixels in an image and its colour depth (bits per pixel).
Compression algorithms can create an approximate representation of the
original image in a smaller number of bytes that can be expanded back to its
uncompressed form with a corresponding decompression algorithm. Compression
tools shall not be used unless absolutely necessary.
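The following back-of-the-envelope Python sketch makes this correlation concrete for an uncompressed raster image; the resolution and colour depth values are illustrative.

```python
# Relationship between pixel count, colour depth and uncompressed raster
# file size, before any JPEG/PNG compression is applied.
def uncompressed_size_bytes(width, height, bits_per_pixel):
    """Raw raster size in bytes for the given resolution and colour depth."""
    return width * height * bits_per_pixel // 8

# Example: a 1920x1080 photo at 24-bit colour depth.
size = uncompressed_size_bytes(1920, 1080, 24)
print(f"{size / 1024 / 1024:.1f} MiB uncompressed")  # ~5.9 MiB
```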
## _**Animated graphical images** _
Graphic animation is a variation of stop motion, perhaps more conceptually
associated with traditional flat cel animation and paper drawing animation,
but still technically qualifying as stop motion, consisting of the animation
of photographs (in whole or in part) and other non-drawn flat visual graphic
material. The allowable animated graphical image file formats are AVI, MPEG,
MP4, and MOV. The WP leader shall determine the most suitable choice of format
based on equipment availability and any other factors. This is mainly relevant
for the SCORES project promo video, which is expected to contain animated
graphical images, infographics and on-site interviews.
### Table 2: Video formats
<table>
<tr>
<th>
**Format**
</th>
<th>
**File**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
MPEG
</td>
<td>
.mpg
.mpeg
</td>
<td>
MPEG. Developed by the Moving Pictures Expert Group. The first popular video
format on the web. Used to be supported by all browsers, but it is not
supported in HTML5 (See MP4).
</td> </tr>
<tr>
<td>
AVI
</td>
<td>
.avi
</td>
<td>
AVI (Audio Video Interleave). Developed by Microsoft. Commonly used in video
cameras and TV hardware. Plays well on Windows computers, but not in web
browsers.
</td> </tr>
<tr>
<td>
WMV
</td>
<td>
.wmv
</td>
<td>
WMV (Windows Media Video). Developed by Microsoft. Commonly used in video
cameras and TV hardware. Plays well on Windows computers, but not in web
browsers.
</td> </tr>
<tr>
<td>
QuickTime
</td>
<td>
.mov
</td>
<td>
QuickTime. Developed by Apple. Commonly used in video cameras and TV hardware.
Plays well on Apple computers, but not in web browsers. (See MP4)
</td> </tr>
<tr>
<td>
RealVideo
</td>
<td>
.rm
.ram
</td>
<td>
RealVideo. Developed by Real Media to allow video streaming with low
bandwidths. It is still used for online video and Internet TV, but does not
play in web browsers.
</td> </tr>
<tr>
<td>
Flash
</td>
<td>
.swf
.flv
</td>
<td>
Flash. Developed by Macromedia. Often requires an extra component (plug-in) to
play in web browsers.
</td> </tr>
<tr>
<td>
Ogg
</td>
<td>
.ogg
</td>
<td>
Theora Ogg. Developed by the Xiph.Org Foundation. Supported by HTML5.
</td> </tr>
<tr>
<td>
WebM
</td>
<td>
.webm
</td>
<td>
WebM. Developed by Google, with support from Mozilla, Opera, and Adobe.
Supported by HTML5.
</td> </tr>
<tr>
<td>
MPEG-4 or MP4
</td>
<td>
.mp4
</td>
<td>
MP4. Developed by the Moving Pictures Expert Group. Based on QuickTime.
Commonly used in newer video cameras and TV hardware. Supported by all HTML5
browsers. Recommended by YouTube.
</td> </tr> </table>
## _**Audio data** _
An audio file format is a file format for storing digital audio data on a
computer system. The bit layout of the audio data (excluding metadata) is
called the audio coding format and can be uncompressed, or compressed to
reduce the file size, often using lossy compression. The data can be a raw
bitstream in an audio coding format, but it is usually embedded in a container
format or an audio data format with a defined storage layer. The allowable
audio file formats are MP3 and MP4. This is mainly relevant for the SCORES
project promo video, which is expected to contain interviews with key
partners, voice-over and music.
### Table 3: Audio formats
<table>
<tr>
<th>
**Format**
</th>
<th>
**File**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
MIDI
</td>
<td>
.midi
.mid
</td>
<td>
MIDI (Musical Instrument Digital Interface). Main format for all electronic
music devices like synthesizers and PC sound cards. MIDI files do not contain
sound, but digital notes that can be played by electronics. Plays well on all
computers and music hardware, but not in web browsers.
</td> </tr>
<tr>
<td>
RealAudio
</td>
<td>
.rm
.ram
</td>
<td>
RealAudio. Developed by Real Media to allow streaming of audio with low
bandwidths. Does not play in web browsers.
</td> </tr>
<tr>
<td>
WMA
</td>
<td>
.wma
</td>
<td>
WMA (Windows Media Audio). Developed by Microsoft. Commonly used in music
players. Plays well on Windows computers, but not in web browsers.
</td> </tr>
<tr>
<td>
AAC
</td>
<td>
.aac
</td>
<td>
AAC (Advanced Audio Coding). Developed by Apple as the default format for
iTunes. Plays well on Apple computers, but not in web browsers.
</td> </tr>
<tr>
<td>
WAV
</td>
<td>
.wav
</td>
<td>
WAV. Developed by IBM and Microsoft. Plays well on Windows, Macintosh, and
Linux operating systems. Supported by HTML5.
</td> </tr>
<tr>
<td>
Ogg
</td>
<td>
.ogg
</td>
<td>
Ogg Vorbis. Developed by the Xiph.Org Foundation. Supported by HTML5.
</td> </tr>
<tr>
<td>
MP3
</td>
<td>
.mp3
</td>
<td>
MP3 files are actually the sound part of MPEG files. MP3 is the most popular
format for music players. Combines good compression (small files) with high
quality. Supported by all browsers.
</td> </tr>
<tr>
<td>
MPEG-4 or MP4
</td>
<td>
.mp4
</td>
<td>
MP4. Developed by the Moving Pictures Expert Group. Based on QuickTime.
Commonly used in newer video cameras and TV hardware. Supported by all HTML5
browsers. Recommended by YouTube.
</td> </tr> </table>
## _**Textual data** _
A text file is structured as a sequence of lines of electronic text. These
text files shall not contain any control characters, including an end-of-file
marker. In principle, the least complicated form of textual file format shall
be used as the first choice.
On Microsoft Windows operating systems, a file is regarded as a text file if
the suffix of the name of the file is "txt". However, many other suffixes are
used for text files with specific purposes. For example, source code for
computer programs is usually kept in text files that have file name suffixes
indicating the programming language in which the source is written. Most
Windows text files use "ANSI", "OEM", "Unicode" or "UTF-8" encoding.
Prior to the advent of Mac OS X, the classic Mac OS system regarded the
content of a file to be a text file when its resource fork indicated that the
type of the file was "TEXT". Lines of Macintosh text files are terminated with
CR characters.
Being certified Unix, macOS uses POSIX format for text files. Uniform Type
Identifier (UTI) used for text files in macOS is "public.plain-text".
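The following small Python sketch normalises a text file to UTF-8 with Unix (LF) line endings, covering the Windows (CRLF) and classic Mac OS (CR) conventions described above; the source encoding passed in is an assumption that must be checked per file.

```python
# Normalise a text file to UTF-8 with POSIX (LF) line endings.
def normalise_text_file(src, dst, source_encoding="utf-8"):
    # newline="" disables line-ending translation so CRLF/CR survive the read
    with open(src, "r", encoding=source_encoding, newline="") as fh:
        text = fh.read()
    # Convert Windows CRLF first, then classic Mac OS CR, to Unix LF.
    text = text.replace("\r\n", "\n").replace("\r", "\n")
    with open(dst, "w", encoding="utf-8", newline="\n") as fh:
        fh.write(text)

# Hypothetical example: a Windows "ANSI" (cp1252) file converted for archiving.
normalise_text_file("notes_windows.txt", "notes_posix.txt", "cp1252")
```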
## _**Numeric data** _
Numerical data is information that often represents a measured physical
parameter. It shall always be captured in number form. Other types of data can
appear to be in number form (e.g. a telephone number); however, these should
not be confused with true numerical data that can be processed using
mathematical operators.
## _**Process and test data** _
Standard Test Data Format (STDF) is a proprietary file format that originated
within the semiconductor industry for test information, but is now a standard
widely used throughout many industries. It is a commonly used format produced
for/by automatic test equipment (ATE). STDF is a binary format, but it can be
converted either to an ASCII format known as ATDF or to a tab-delimited text
file. Software tools exist for processing STDF-generated files and performing
statistical analysis on a population of tested devices. SCORES innovation
development shall make use of this file type for system testing.
## _**Adobe Systems** _
Portable Document Format (PDF) is a file format developed by Adobe Systems for
representing documents in a manner that is independent of the original
application software, hardware, and operating system used to create those
documents. A PDF file can describe documents containing any combination of
text, graphics, and images in a device independent and resolution independent
format. These documents can be one page or thousands of pages, very simple or
extremely complex with a rich use of fonts, graphics, colour, and images. PDF
is an open standard, and anyone may write applications that can read or write
PDFs royalty-free. PDF files are especially useful for documents such as
magazine articles, product brochures, or flyers in which you want to preserve
the original graphic appearance online.
# GDPR compliance
At every stage, the SCORES project management and Project Consortium will
ensure that the Data Management Plan is in line with the norms of the EU and
the Commission, as expressed in the General Data Protection Regulation (GDPR)
(Regulation (EU) 2016/679), and will promote best practice in data management.
The GDPR comes into force on 25 May 2018.
The responsibility for the protection and use of personal data lies with the
Project partner collecting the data. Questionnaire answers shall be anonymized
at as early a stage of the process as possible, and data making it possible to
connect the answers to individual persons shall be destroyed. The consent of
the questionnaire participant will be asked in all questionnaires conducted
within the SCORES project. This will include a description of how and why the
data is to be used. The consent must be clear and distinguishable from other
matters and provided in an intelligible and easily accessible form, using
clear and plain language. It must be as easy to withdraw consent as it is to
give it.
The questionnaire participants will not include children or other groups
requiring a supervisor. Furthermore, when asking for somebody's contact
information, the asking party shall explain why this information is requested
and for what purposes it will be used.
## Controller and Processor
Controller means the natural or legal person, public authority, agency or
other body which, alone or jointly with others, determines the purposes and
means of the processing of personal data.
Processor refers to a natural or legal person, public authority, agency or
other body which processes personal data on behalf of the controller.
## Data Protection Officer
The Data Protection Officer (DPO) is responsible for overseeing data
protection strategy and implementation to ensure compliance with GDPR
requirements. Under the GDPR, there are three main scenarios where the
appointment of a DPO by a controller or processor is mandatory:
* The processing is carried out by a public authority
* The core activities of the controller or processor consist of processing operations which require regular and systematic monitoring of data subjects on a large scale; or
* The core activities of the controller or processor consist of processing on a large scale of sensitive data or data relating to criminal convictions / offences.
Each SCORES partner shall assess its own data processing activities to
understand whether they fall within the scope of the requirements set out
above. If they do, then it will be important to fill the DPO position either
internally or from an external source. Organizations to whom the requirements
do not apply may still choose to appoint a DPO. If they choose not to appoint
a DPO, then it is recommended to document the reasoning behind that decision.
## Data protection
European citizens have a fundamental right to privacy. To protect this right
of the individual data subject, anonymisation and pseudonymisation can be
used.
Anonymization refers to personal data processing with the aim of irreversibly
preventing the identification of the individual to whom it relates. For the
anonymized types of data, the GDPR does not apply, as long as the data subject
cannot be re-identified, even by matching his/her data with other information
held by third parties.
Pseudonymization refers to the processing of personal data in such a manner
that the data can no longer be attributed to a specific data subject without
the use of additional information. To pseudonymize a data set, the additional
information must be kept separately and be subject to technical and
organizational measures ensuring non-attribution to an identified or
identifiable person. In other words, pseudonymized data constitutes the basic
privacy-preserving level allowing for some data sharing; it represents data
where direct identifiers (e.g. names) or quasi-identifiers (e.g. unique
combinations of dates and zip codes) are removed and replaced using a
substitution algorithm, impeding correlation of readily associated data with
the individual's identity. For such data, the GDPR applies and appropriate
compliance must be ensured.
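For illustration of the concept only (the following paragraph notes that SCORES itself will not apply these techniques), here is a minimal Python sketch of pseudonymisation by keyed hashing, where the key plays the role of the separately stored "additional information"; the names, key and record fields are placeholders.

```python
import hashlib
import hmac

# The key is the "additional information": it must be stored separately from
# the data, under technical and organisational safeguards. Placeholder value.
SECRET_KEY = b"kept-separately-from-the-data"

def pseudonym(identifier: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "postcode": "1234AB", "answer": "agree"}
pseudonymised = {
    "subject_id": pseudonym(record["name"]),  # direct identifier removed
    "answer": record["answer"],               # non-personal payload kept
}
print(pseudonymised)
```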
Due to the limited amount and less harmful nature of the personal data that is
collected within the SCORES project, neither pseudonymisation nor
anonymisation will be used. Other means of data security will be used to
protect data collected in the framework of the Project.
## Breach Notification
Under the GDPR, breach notification will become mandatory in all member states
where a data breach is likely to “result in a risk for the rights and freedoms
of individuals”. Notification must be made within 72 hours of first having
become aware of the breach. Data processors will also be required to notify
the controllers “without undue delay” after first becoming aware of a data
breach.
## Right to be Forgotten
Also known as data erasure, the right to be forgotten entitles the data
subject to have the data controller erase his/her personal data, cease further
dissemination of the data, and potentially have third parties halt processing
of the data. The conditions for erasure include the data no longer being
relevant to the original purposes for processing, or a data subject
withdrawing consent. It should also be noted that this right requires
controllers to weigh the subject's rights against “the public interest in the
availability of the data” when considering such requests. If a data subject
wants his/her personal data to be removed from a questionnaire, the
non-personal data shall remain in the analysis of the questionnaire.
## Data portability
The GDPR introduces data portability, which refers to the right of a data
subject to receive the personal data concerning them which they have
previously provided, in a “commonly used and machine-readable format”, and the
right to transmit that data to another controller.
The personal data collected within the SCORES project will be in electronic
form, mostly in Microsoft Excel files (.xls or .xlsx). In case a data subject
requests to transmit his/her data to another controller, there should be no
technical limitations to providing it.
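A minimal Python sketch of such an export, assuming the data is held in an Excel file with a hypothetical "email" column, could look as follows; the file name and the example address are placeholders.

```python
import pandas as pd

# Honouring a portability request: export one data subject's rows from the
# .xlsx file in commonly used, machine-readable formats (CSV and JSON).
data = pd.read_excel("questionnaire_answers.xlsx")
subject_rows = data[data["email"] == "jane.doe@example.org"]

subject_rows.to_csv("portability_export.csv", index=False)
subject_rows.to_json("portability_export.json", orient="records")
```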
## Privacy by design and by default
Privacy by design refers to the obligation of the controller to implement
appropriate technical and organisational measures, such as pseudonymisation,
which are designed to implement data protection principles, such as data
minimisation, in an effective manner and to integrate the necessary safeguards
into the processing.
Privacy by default means that the controller shall implement appropriate
technical and organisational measures for ensuring that only personal data
which are necessary for each specific purpose of the processing are processed.
That obligation applies to:
* the amount of personal data collected,
* the extent of personal data processing,
* the period of personal data storage, and
* the accessibility of personal data.
In particular, such measures shall ensure that by default personal data are
not made accessible, without the individual's intervention, to an indefinite
number of natural persons.
Personal data collected during the SCORES project will be used only by project
partners, including linked third parties, and only for purposes needed for the
implementation of the project. Within the SCORES project, if a member of the
project consortium asks for personal data, the partner holding the data should
consider whether that data is needed for the implementation of the Project. If
personal data is provided, the data shall not be distributed further within or
outside the Project.
## Records of processing activities
Records of data processing and plans for the use of data will be kept by the
WP Leaders of those work packages that collect personal data.
# Naming convention
Every document submitted to the SCORES Sharepoint site is named in accordance
with the SCORES Sharepoint User and Security Instruction as follows:
[ **Company Name** ]-SCORES-[ **Doc Type** ]-[ **Doc No.** ]-[ **Issue No.**
]_[ **Title** ]
Where:
* [ **Company Name** ] is the name of the project partner responsible for issuing the document
* [ **Doc Type** ] is the type of the document, such as:
  * RP: Report
  * LI: List or Excel file
  * PROC: Procedure
  * MOM: Minutes of Meeting
  * ECM: Engineering Coordination Memo
  * HO: Hand-out or presentation
  * PL: Plan
  * TR: Test Report
  * TP: Test Plan
  * NDA: Non-Disclosure Agreement
  * LT: Letter
  * SC: Schedule
  * AG: Agenda
  * ABS: Abstract
  * FI: Film, movie
  * PPR: Project Periodic Report
  * PA: Paper, article
  * ST: Sticker
* [ **Doc No.** ] is a unique sequential number for each document, following on from the previous document
* [ **Issue No.** ] is the issue number of the document
* [ **Title** ] is the title of the document, which should be clear and meaningful and without abbreviations
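A minimal Python sketch of a parser/validator for this convention is given below; the regular expression and the permitted character classes are assumptions, since the convention above does not fully specify them.

```python
import re

# Validator for the SCORES SharePoint naming convention:
# [Company Name]-SCORES-[Doc Type]-[Doc No.]-[Issue No.]_[Title]
DOC_TYPES = {"RP", "LI", "PROC", "MOM", "ECM", "HO", "PL", "TR", "TP",
             "NDA", "LT", "SC", "AG", "ABS", "FI", "PPR", "PA", "ST"}

NAME_RE = re.compile(
    r"^(?P<company>[A-Za-z]+)-SCORES-(?P<doc_type>[A-Z]+)"
    r"-(?P<doc_no>\d+)-(?P<issue_no>\d+)_(?P<title>.+)$"
)

def parse_document_name(name):
    """Return the naming-convention fields, or raise if the name is invalid."""
    m = NAME_RE.match(name)
    if not m or m.group("doc_type") not in DOC_TYPES:
        raise ValueError(f"not a valid SCORES document name: {name!r}")
    return m.groupdict()

# Hypothetical example document name:
print(parse_document_name("FEN-SCORES-RP-001-2_Data Management Plan"))
```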
# Expected research data of the SCORES project
Expected research data of the SCORES project is listed below. The table
template will be circulated periodically in order to monitor the data sets and
set the strategy for their sharing.
## **Table 4: List of the SCORES project data sets and sharing strategy**
<table>
<tr>
<th>
**WP number and name**
</th>
<th>
**WP**
**lead**
</th>
<th>
**Task number and name**
</th>
<th>
**Duration**
</th>
<th>
**Task lead**
</th>
<th>
**Dataset name**
</th>
<th>
**Dataset description**
</th>
<th>
**Format**
</th>
<th>
**Level**
</th> </tr>
<tr>
<td>
WP1
Management
</td>
<td>
TNO
</td>
<td>
Project management, financial and administrative management
</td>
<td>
M1-M48
</td>
<td>
TNO
</td>
<td>
Meeting plan
</td>
<td>
Consortium meeting plan report
</td>
<td>
.pdf
</td>
<td>
PU
</td> </tr>
<tr>
<td>
Quality assurance and
risk
management
plan
</td>
<td>
Report
</td>
<td>
.pdf
</td>
<td>
PU
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
• Public data: SCORES website
</td>
<td>
**Data Archiving and preservation**
</td>
<td>
Sharepoint
</td>
<td>
**Data management Responsibilities**
</td>
<td>
Pavol Bodis
</td> </tr> </table>
<table>
<tr>
<th>
**WP number and name**
</th>
<th>
**WP**
**lead**
</th>
<th>
**Task number and name**
</th>
<th>
**Duration**
</th>
<th>
**Task lead**
</th>
<th>
**Dataset name**
</th>
<th>
**Dataset description**
</th>
<th>
**Format**
</th>
<th>
**Level**
</th> </tr>
<tr>
<td>
WP2
Modelling and evaluation of the system added value and business opportunities
</td>
<td>
RINA-C
</td>
<td>
Task 2.1 - Top-down modelling of the building energy system including local
renewable generation and
grid supply,
conversion, storage and consumption of energy
</td>
<td>
M1-M6
</td>
<td>
EDF
</td>
<td>
Results of techno-
economic modelling
</td>
<td>
Results of techno-economic modelling
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Task 2.2 - Cost-benefit analysis of the integrated systems
</td>
<td>
M24-
M42
</td>
<td>
EDF
</td>
<td>
Report on cost-benefit evaluation
</td>
<td>
Report on cost-benefit evaluation
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Task 2.3 - Business model and commercial deployment roadmap and strategy
</td>
<td>
M12-
M45
</td>
<td>
RINA
</td>
<td>
Market analysis on hybrid storage components
</td>
<td>
Market analysis
(identification of potential customers,
optimal scenarios for the technology
application and
competitors) related to the hybrid storage components
</td>
<td>
.pdf
</td>
<td>
PU
</td> </tr>
<tr>
<td>
Technological roadmap for the commercial deployment of
</td>
<td>
Development of technology and
commercialisation roadmap of the
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
the hybrid
storage system
</th>
<th>
demonstrated hybrid storage systems
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
New business models for the
SCORES hybrid storage system
</th>
<th>
Development of differentiated business models for the
proposed hybrid storage systems
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 2.4 – Impact of the hybrid storage implementation on grid flexibility
</th>
<th>
M12-
M45
</th>
<th>
EDF
</th>
<th>
Report on impact of hybrid storage
implementation
on grid flexibility
</th>
<th>
Report on impact of hybrid storage
implementation on grid
flexibility
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 2.5 – Measures for
future standardization
</th>
<th>
M6-M45
</th>
<th>
RINA
</th>
<th>
Report on measures for future
standardization
</th>
<th>
Report on measures for future
standardization, analysis and
assessment of the
current legislative framework of
reference for the proposed hybrid energy system
</th>
<th>
.pdf
</th>
<th>
PU
</th> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
* Confidential data: Sharepoint
* Public data: SCORES website
</td>
<td>
**Data Archiving and preservation**
</td>
<td>
Sharepoint,
Company server
</td>
<td>
**Data management Responsibilities**
</td>
<td>
Nicolò Olivieri
</td> </tr> </table>
<table>
<tr>
<th>
**WP number and name**
</th>
<th>
**WP**
**lead**
</th>
<th>
**Task number and name**
</th>
<th>
**Duration**
</th>
<th>
**Task lead**
</th>
<th>
**Dataset name**
</th>
<th>
**Dataset description**
</th>
<th>
**Format**
</th>
<th>
**Level**
</th> </tr>
<tr>
<td>
WP3
Enhancement of energy conversion technology
</td>
<td>
IPS
</td>
<td>
Task 3.1 – Electro-Thermal storage units for ambient air
comfort development
</td>
<td>
M1-M24
</td>
<td>
CAM
</td>
<td>
Storage charge and discharge cycles
</td>
<td>
Temperature and electrical power logs of charge and
discharge cycles of
SETS heat storage unit in the lab
</td>
<td>
.csv
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Storage charge and discharge
cycles in real conditions
</td>
<td>
Temperature and electrical power logs of charge and
discharge cycles of
SETS heat storage unit in real conditions
</td>
<td>
.csv
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Task 3.2 - Development of air to air heat pumps with PCM storage energy system
for
space heating
</td>
<td>
M1-M24
</td>
<td>
CAM
</td>
<td>
Storage charge and discharge cycles
</td>
<td>
Temperature and electrical power logs of charge and
discharge cycles of
SETS heat storage unit in the lab
</td>
<td>
.csv
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Task 3.3 - Development of DHW production system based on water to water heat
pump coupled with PV/T collectors
</td>
<td>
M1-M24
</td>
<td>
HEL
</td>
<td>
System Design
</td>
<td>
Technical specifications on DHW subsystem powered
by Water to water heat pumps connected to
PVT solar collectors
Technical specifications on
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
interactions between
DHW and CLC subsystems
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
System Design
</th>
<th>
Optimized Control strategy of the DHW
subsystem coupled to CLC
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
* Confidential data: Sharepoint
* Public data: SCORES website, Zenodo, OpenAIRE
</td>
<td>
**Data Archiving and preservation**
</td>
<td>
SCORES project website, Sharepoint
</td>
<td>
**Data management Responsibilities**
</td>
<td>
Cláudia Louro
</td> </tr> </table>
<table>
<tr>
<th>
**WP number and name**
</th>
<th>
**WP**
**lead**
</th>
<th>
**Task number and name**
</th>
<th>
**Duration**
</th>
<th>
**Task lead**
</th>
<th>
**Dataset name**
</th>
<th>
**Dataset description**
</th>
<th>
**Format**
</th>
<th>
**Level**
</th> </tr>
<tr>
<td>
WP4
Development
of electrical storage system using second-life Liion battery
</td>
<td>
FORSEE POWER
</td>
<td>
Task 4.1 – Sourcing and qualification of used EV
battery
</td>
<td>
M1-M12
</td>
<td>
FOP
</td>
<td>
Definition of the selection process
</td>
<td>
Report
</td>
<td>
.pdf
</td>
<td>
PU
</td> </tr>
<tr>
<td>
Battery selection: final choice
</td>
<td>
Report
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Task 4.2 – Design and production of the repackaged
</td>
<td>
M13-
M24
</td>
<td>
FOP
</td>
<td>
Technical specifications and drawings of
</td>
<td>
Report
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
battery
</th>
<th>
</th>
<th>
</th>
<th>
the 2 batteries
cabinet (T4.2)
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
Batteries cabinet (T4.2)
</th>
<th>
Demonstrator
</th>
<th>
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 4.3 – Design and production of the converter cabinet
</th>
<th>
M13-
M24
</th>
<th>
SIE
</th>
<th>
Technical specifications
and drawings of
the converter cabinet (T4.3)
</th>
<th>
Report
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Converters cabinet (T4.3)
</th>
<th>
Demonstrator
</th>
<th>
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 4.4 – Integration between battery and converters controller, converter
controller and BEMS
</th>
<th>
M13-
M24
</th>
<th>
SIE
</th>
<th>
Technical manual of the
ESS
</th>
<th>
Report
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 4.5 – Integration with
the local environment
</th>
<th>
M19-
M24
</th>
<th>
RINA
</th>
<th>
End of Life
</th>
<th>
End of life data about second life batteries
</th>
<th>
.xls
</th>
<th>
CO
</th> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
* Confidential data: Sharepoint
* Public data: SCORES website, Zenodo, OpenAIRE
</td>
<td>
**Data Archiving and preservation**
</td>
<td>
Sharepoint, FOP’s server
</td>
<td>
**Data management Responsibilities**
</td>
<td>
Julien Sarazin
</td> </tr> </table>
<table>
<tr>
<th>
**WP number and name**
</th>
<th>
**WP**
**lead**
</th>
<th>
**Task number and name**
</th>
<th>
**Duration**
</th>
<th>
**Task lead**
</th>
<th>
**Dataset name**
</th>
<th>
**Dataset description**
</th>
<th>
**Format**
</th>
<th>
**Level**
</th> </tr>
<tr>
<td>
WP5
Optimization of heat storage technology based on Chemical
Looping
Combustion
</td>
<td>
TNO
</td>
<td>
Task 5.1 – Design of CLC storage with the focus on energy density, reliability
and costs
</td>
<td>
M1-M18
</td>
<td>
TNO
</td>
<td>
System design
</td>
<td>
Report
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Task 5.2 - Enhancement of
the CLC storage subsystem
</td>
<td>
M1-M26
</td>
<td>
TNO
</td>
<td>
Control strategy
</td>
<td>
Report
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Experimental data
</td>
<td>
Report
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Task 5.3 - Building of the
CLC storage subsystem
</td>
<td>
M13-
M26
</td>
<td>
KMG
</td>
<td>
CLC Design
</td>
<td>
Drawings
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
CLC setup
</td>
<td>
Photographs
</td>
<td>
.jpg
</td>
<td>
PU
</td> </tr>
<tr>
<td>
Task 5.4 - Validation of the
CLC storage subsystem
</td>
<td>
M19-
M30
</td>
<td>
TNO
</td>
<td>
Experimental data
</td>
<td>
Report
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr> </table>
<table>
<tr>
<th>
**Data Sharing**
</th>
<th>
•
•
</th>
<th>
Confidential data:
Sharepoint
Public data: SCORES website
</th>
<th>
**Data Archiving and preservation**
</th>
<th>
Sharepoint
</th>
<th>
**Data management Responsibilities**
</th>
<th>
Pavol Bodis
</th> </tr> </table>
<table>
<tr>
<th>
**WP number and name**
</th>
<th>
**WP**
**lead**
</th>
<th>
**Task number and name**
</th>
<th>
**Duration**
</th>
<th>
**Task lead**
</th>
<th>
**Dataset name**
</th>
<th>
**Dataset description**
</th>
<th>
**Format**
</th>
<th>
**Level**
</th> </tr>
<tr>
<td>
WP6 Energy management system and
(electrical) system
integration
</td>
<td>
SIEMENS
</td>
<td>
Task 6.1 – Definition of requirements, design and implementation of a building
energy system (BEMS) that interfaces
with the package units, the external energy infrastructure, the metering of
the demand of electricity and heat, and external sources of information
</td>
<td>
M1-M12
</td>
<td>
EDF
</td>
<td>
Requirements report for the
BEMS
</td>
<td>
Requirements report for the BEMS
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Task 6.2 – Definition of subsystem controls and
system algorithm
</td>
<td>
M1-M12
</td>
<td>
TNO
</td>
<td>
Control strategies of the
SCORES
subsystems
</td>
<td>
Report
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Models of the
SCORES
subsystems
</td>
<td>
Report
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
Task 6.3 – Definition of interfaces (including data communication protocol)
that supports the selected subsystems,
external energy infrastructure, metering of the demand of electricity and
heat, and relevant sources of information
</th>
<th>
M1-M12
</th>
<th>
SIE
</th>
<th>
BEMS
requirements
and preliminary design document
</th>
<th>
Report
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 6.4 – Implementation of the BEMS including control loops, system
algorithm and interfaces
</th>
<th>
M10-
M22
</th>
<th>
SIE
</th>
<th>
Detailed Design document of
BEMS
</th>
<th>
Report
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 6.5 – Remote
monitoring and optimalisation
</th>
<th>
M13-
M48
</th>
<th>
AEE
</th>
<th>
Measurement
plan
</th>
<th>
Report on measurement equipment
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Measurement data
</th>
<th>
Data from measurement
</th>
<th>
.csv
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Optimization plan
</th>
<th>
Report on optimization measures
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Measurement
data
</th>
<th>
Measurement data for optimized system
</th>
<th>
.csv
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Quantification of optimization
</th>
<th>
Report on
Analysis/Comparison of optimization measures
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr> </table>
<table>
<tr>
<th>
**Data Sharing**
</th>
<th>
•
</th>
<th>
Confidential data: Sharepoint
</th>
<th>
**Data Archiving and preservation**
</th>
<th>
Sharepoint
</th>
<th>
**Data management Responsibilities**
</th>
<th>
Paola Enriquez Ojeda
</th> </tr> </table>
<table>
<tr>
<th>
**WP number and name**
</th>
<th>
**WP**
**lead**
</th>
<th>
**Task number and name**
</th>
<th>
**Duration**
</th>
<th>
**Task lead**
</th>
<th>
**Dataset name**
</th>
<th>
**Dataset description**
</th>
<th>
**Format**
</th>
<th>
**Level**
</th> </tr>
<tr>
<td>
WP7
Demonstration of the integrated energy system including the innovative
technologies in an existing multifamily building connected to a district
heating grid
</td>
<td>
AEE INTEC
</td>
<td>
Task 7.1 – Requirements, design and simulation of the overall energy system
</td>
<td>
M1-M26
</td>
<td>
TNO
</td>
<td>
SCORES
System requirements
for DEMO-A
</td>
<td>
Requirements
</td>
<td>
.xls
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Task 7.2 – System integration, installation, commissioning and decommissioning
including
all the components and the
BEMS
</td>
<td>
M18-
M29
</td>
<td>
HEL
</td>
<td>
Regulation verification
</td>
<td>
Regulation verification before installation in
Demo A
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Integration, installation,
commissioning plan
</td>
<td>
Integration, installation,
commissioning plan
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Photos of installed system
</td>
<td>
Photos of installed system
</td>
<td>
.jpg
</td>
<td>
PU
</td> </tr>
<tr>
<td>
Decommissioning plan
</td>
<td>
Decommissioning plan
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
Task 7.3 – System operation, testing and experiments
</th>
<th>
M25-
M29
</th>
<th>
AEE
</th>
<th>
Test & experiments plan
</th>
<th>
Plan containing which system tests and
experiments will be performed
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Measurement data
</th>
<th>
Data of the experimental measurement
</th>
<th>
.csv
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Comparison of experimental data with simulation
</th>
<th>
Comparison of experimental data with simulation results
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 7.4 – System operation testing and experiments also for the case when
there is no heating-grid connection as an alternative side-scenario
</th>
<th>
M25-
M29
</th>
<th>
AEE
</th>
<th>
Test & experiments plan
</th>
<th>
Plan containing which system tests and
experiments will be performed
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Measurement data
</th>
<th>
Data of the experimental measurement
</th>
<th>
.csv
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Comparison of experimental data with simulation
</th>
<th>
Comparison of experimental data with simulation results
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 7.5 - Evaluation of the technical, environmental and economic benefits of
the system
</th>
<th>
M40-
M48
</th>
<th>
EDF
</th>
<th>
Description of methodology
</th>
<th>
Description of evaluation methodology
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Report on technical,
economic and environmental
</th>
<th>
Report on technical, economic and environmental performances of
</th>
<th>
.pdf
</th>
<th>
PU
</th> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
performances of Demo A
</th>
<th>
Demo A
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
* Confidential data: Sharepoint
* Public data: SCORES website
</td>
<td>
**Data Archiving and preservation**
</td>
<td>
Sharepoint
</td>
<td>
**Data management Responsibilities**
</td>
<td>
Alexander
Thomas
Goritschnig
</td> </tr> </table>
<table>
<tr>
<th>
**WP number and name**
</th>
<th>
**WP**
**lead**
</th>
<th>
**Task number and name**
</th>
<th>
**Duration**
</th>
<th>
**Task lead**
</th>
<th>
**Dataset name**
</th>
<th>
**Dataset description**
</th>
<th>
**Format**
</th>
<th>
**Level**
</th> </tr>
<tr>
<td>
WP8
Demonstration of the integrated energy system including the innovative
technologies in an existing multifamily building with electric space heating
</td>
<td>
EDF
</td>
<td>
Task 8.1 – Requirements, design and simulation of the overall energy system
</td>
<td>
M1-M26
</td>
<td>
TNO
</td>
<td>
Building characteristics
</td>
<td>
Characteristics of the multifamily building hosting the demonstrator
</td>
<td>
.pdf
</td>
<td>
PU
</td> </tr>
<tr>
<td>
Subsystem performance and boundaries
</td>
<td>
Inventory of subsystem
performance and boundaries
</td>
<td>
.pdf
</td>
<td>
CO
</td> </tr>
<tr>
<td>
System breakdown
</td>
<td>
Chart of system breakdown
</td>
<td>
.jpg
</td>
<td>
CO
</td> </tr>
<tr>
<td>
Requirements of SCORES
Future System and Demo B
</td>
<td>
List of requirements to fulfill for the SCORES
Future System and the Demonstrator
</td>
<td>
.xls
</td>
<td>
CO
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
System design
</th>
<th>
Architecture of the system design in
Demo B
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
SCORES
evaluation methodology
</th>
<th>
Description of the methodology for the performance
evaluation of the
Demo B
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Reference system
</th>
<th>
Description of the reference system
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Modelling inputs for simulation
</th>
<th>
List of modelling inputs for simulation
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Inputs for algorithm
implementation in simulation
</th>
<th>
List of inputs for algorithm
implementation in simulation
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Modelling architecture
</th>
<th>
Description of modelling architecture in software environment
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Report on expected performances
</th>
<th>
Report on expected performances of the
Demo B from the simulation
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 8.2 – System integration, installation, commissioning and decommissioning
including all the components and the
</th>
<th>
M18-
M29
</th>
<th>
HEL
</th>
<th>
Regulation verification
</th>
<th>
Regulation verification before installation in
Demo B
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Integration, installation,
</th>
<th>
Integration, installation,
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
BEMS
</th>
<th>
</th>
<th>
</th>
<th>
commissioning plan
</th>
<th>
commissioning plan
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
Photos of installed system
</th>
<th>
Photos of installed system
</th>
<th>
.jpeg
</th>
<th>
PU
</th> </tr>
<tr>
<th>
Decommissioning plan
</th>
<th>
Decommissioning plan
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 8.3 – Field test and measurements of the system performance
</th>
<th>
M25-
M42
</th>
<th>
AEE
</th>
<th>
Test plan
</th>
<th>
Test plan
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Measurement data
</th>
<th>
Measurement data
</th>
<th>
.csv
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Comparison of field test data with simulation
</th>
<th>
Comparison of field test data with simulation
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 8.4 – Technical, economic and environmental
evaluation of the system
</th>
<th>
M40-
M48
</th>
<th>
EDF
</th>
<th>
Description of methodology
</th>
<th>
Description of evaluation methodology
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Report on technical,
economic and environmental
performances of
Demo B
</th>
<th>
Report on technical, economic and environmental
performances of
Demo B
</th>
<th>
.pdf
</th>
<th>
PU
</th> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
* Confidential data: Sharepoint
* Public data: suitable platforms, SCORES website
</td>
<td>
**Data Archiving and preservation**
</td>
<td>
Sharepoint
</td>
<td>
**Data management Responsibilities**
</td>
<td>
Thuy-An Nguyen
</td> </tr> </table>
<table>
<tr>
<th>
**WP number and name**
</th>
<th>
**WP**
**lead**
</th>
<th>
**Task number and name**
</th>
<th>
**Duration**
</th>
<th>
**Task lead**
</th>
<th>
**Dataset name**
</th>
<th>
**Dataset description**
</th>
<th>
**Format**
</th>
<th>
**Level**
</th> </tr>
<tr>
<td>
WP9
Dissemination and
exploitation of results
</td>
<td>
FENIX TNT SRO
</td>
<td>
Task 9.1 – Dissemination and Communication
</td>
<td>
M1-M48
</td>
<td>
FEN
</td>
<td>
Communication and
Dissemination Plan
</td>
<td>
Report identifying target audiences, key messages,
communication
channels, roles and timelines
</td>
<td>
.pdf
</td>
<td>
PU
</td> </tr>
<tr>
<td>
Data
Management Plan
</td>
<td>
Report analysing the main data uses and
restrictions related to
IPR according to the
Consortium
Agreement
</td>
<td>
.pdf
</td>
<td>
PU
</td> </tr>
<tr>
<td>
Promo
materials (e.g.
Brochure, rollup, poster, project
presentation design)
</td>
<td>
Images and logos from project partners, photos/videos from
dissemination events, project promo videos
consisting of animated graphical images,
filming, voice over and music. Promo
materials shared online.
_The owner gives permission to FENIX_
</td>
<td>
eps,
.jpeg,
.png,
.mpeg,
.avi,
.mp4,
.pdf
</td>
<td>
PU
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
_to use images for dissemination_
_purposes of SCORES._
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<th>
Task 9.2 – Exploitation and
IPR management
</th>
<th>
M13-
M48
</th>
<th>
FEN
</th>
<th>
Exploitation
Plan
</th>
<th>
Identification of the key exploitable results, exploitable forms, competition,
risk
analysis, potential obstacles
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
IPR Manual
</th>
<th>
Background knowledge and
existing patents
mapping, potentially overlapping IPR, optimal IPR protection
</th>
<th>
.pdf
</th>
<th>
CO
</th> </tr>
<tr>
<th>
Task 9.3 – Policy implications, workshops and
final conference
</th>
<th>
M18-
M48
</th>
<th>
EDF
</th>
<th>
Dissemination of results to targeted audience
</th>
<th>
Dissemination of results to targeted audience
</th>
<th>
.pdf
</th>
<th>
PU
</th> </tr>
<tr>
<th>
Promotion of workshop
</th>
<th>
Promotion of workshop
</th>
<th>
E-mail / Flyers
</th>
<th>
PU
</th> </tr>
<tr>
<th>
Workshop medium for information
</th>
<th>
Workshop medium for information
</th>
<th>
.ppt /
Posters
</th>
<th>
PU
</th> </tr>
<tr>
<th>
Promotion of conference
</th>
<th>
Promotion of conference
</th>
<th>
E-mail / Flyers
</th>
<th>
PU
</th> </tr>
<tr>
<th>
Conference medium for information
</th>
<th>
Conference medium for information
</th>
<th>
.ppt / Posters
</th>
<th>
PU
</th> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
Task 9.4 – Training activities
</th>
<th>
M25-
M48
</th>
<th>
IPS
</th>
<th>
Report on training
activities
</th>
<th>
Report on the training activities, exploiting instructions,
processes and tools developed in the
framework of the Project.
</th>
<th>
.pdf
</th>
<th>
PU
</th> </tr>
<tr>
<th>
Task 9.5 – Social impact
</th>
<th>
M25-
M48
</th>
<th>
FEN
</th>
<th>
Potential social impact of the
Project and users
engagement
</th>
<th>
Report on social impact
</th>
<th>
.pdf
</th>
<th>
PU
</th> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
* Confidential data: Sharepoint
* Promo material (PU): SCORES website, social network profiles, videos on YouTube, thematic portals
* Public reports: SCORES website
</td>
<td>
**Data Archiving and preservation**
</td>
<td>
Sharepoint and company server
</td>
<td>
**Data management Responsibilities**
</td>
<td>
Petra Colantonio
</td> </tr> </table>
# Publication
The SCORES Consortium intends to submit papers for scientific/industrial
publication during the course of the SCORES project. In the framework of the
Dissemination and Communication Plan agreed by the GA, project partners are
responsible for the preparation of the scientific publications, as well as for
the selection of the publisher considered most relevant for the subject
matter. Each publisher has its own policies on self-archiving (Green Open
Access: researchers can deposit a version of their published work into a
subject-based repository or an institutional repository; Gold Open Access:
alternatively, researchers can publish in an open access journal, where the
publisher of a scholarly journal provides free online access).
After the paper is published and a licence for open access is obtained, the
project partner will contact the Dissemination and Exploitation Manager
(FENIX), who is responsible for SCORES data management. FENIX will upload the
publication to the project website and deposit it in the OpenAIRE-compatible
repository ZENODO, indicating in the metadata the project it belongs to.
Dedicated pages per project are visible on the OpenAIRE portal.
For adequate identification of accessible data, all the following metadata
information will be included:
* Information about the grant number, name and acronym of the action: European Union (EU), Horizon 2020 (H2020), Innovation Action (IA), SCORES acronym, GA No 766464
* Information about the publication date and embargo period if applicable: Publication date, Length of embargo period
* Information about the persistent identifier (for example a Digital Object Identifier, DOI): Persistent identifier, if any, provided by the publisher (for example an ISSN number)
More detailed rules and processes for OpenAIRE and ZENODO can be found in the
FAQ at _https://www.openaire.eu/support/faq_ .
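For illustration, the metadata elements listed above could be expressed in the JSON structure accepted by Zenodo's deposit API roughly as follows; all titles, names, dates and identifiers are placeholders, and field support should be verified against the current API documentation.

```python
import json

# Sketch of a Zenodo deposit metadata record carrying the identification
# elements listed above. Only the grant number (GA No 766464) comes from
# this document; everything else is a placeholder.
deposit_metadata = {
    "metadata": {
        "title": "SCORES dataset accompanying a peer-reviewed publication",
        "upload_type": "dataset",
        "description": "Data needed to validate the results of the publication.",
        "creators": [{"name": "Doe, Jane", "affiliation": "SCORES partner"}],
        "grants": [{"id": "766464"}],   # H2020 Innovation Action, GA No 766464
        "access_right": "embargoed",    # only if an embargo period applies
        "embargo_date": "2019-01-01",   # placeholder embargo end date
        "related_identifiers": [
            # links the deposit to the publication's persistent identifier
            {"identifier": "10.1234/example-doi", "relation": "isSupplementTo"}
        ],
    }
}
print(json.dumps(deposit_metadata, indent=2))
```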
# Conclusion
This report contains the first release of the Data Management Plan for the
SCORES project and provides preliminary guidelines for the management of the
project results during the project and beyond. Data management related to data
generation, storage and sharing has been addressed. The report will be subject
to revisions as required to meet the needs of the SCORES project and will be
formally reviewed at months 18 and 36 and at the end of the project to ensure
ongoing fitness for purpose.
# Executive summary
This document is a deliverable of the CARBAT project funded by the European
Union’s H2020-FETOPEN2016-2017 Programme under Grant Agreement #766617. It
describes the initial Data Management Plan (DMP) whose purpose is to set out
the main elements of the CARBAT consortium data management policy for all data
generated within the project. This DMP follows the recommendations provided by
the European Commission through the “ _Guidelines on FAIR Data Management in
Horizon 2020_ ”. It is a living document to be updated at regular intervals
during the project. The final DMP will be available by the completion of the
project on month 36.
# Purpose of the CARBAT Data Management Plan
CARBAT partners aim to engage in activities that will maximize the
discoverability and preservation of the data generated during the project,
ensuring an increased impact of the developed knowledge. Thus, the CARBAT
project participates in the Open Research Data Pilot launched by the European
Commission along with the Horizon 2020 programme, and as such has elaborated a
Data Management Plan (DMP) in agreement with the recommendations provided by
the European Commission.
The present DMP describes the key data management principles, notably in terms
of data standards and metadata, sharing, archiving and preservation, and is
organized per work package.
The project will use decentralized access to data, with each partner being
responsible for long-term storage of its data after the end of the project.
This structure will provide the flexibility required for handling the IPR
requested by the partners. The next section (“Data summary”) explains which
datasets are expected to be produced by every partner per work package.
The consortium has agreed to publish, where possible, data produced within the
project with open access, once the results deemed suitable for Intellectual
Property Rights (IPR) protection have been duly identified and protected
through the specifically designed IPR Management Plan and Communication and
Dissemination Plan. To this end, the CARBAT consortium will consult the
concerned partner(s) before publishing data.
# Data summary
The main objective of CARBAT is to achieve proof-of-concept for a Ca metal
rechargeable battery with high energy density through the cooperative
development of high capacity cathode materials and optimization of reliable
and efficient non-aqueous electrolyte formulations. CARBAT will accomplish
this by combining scientific efforts and excellence in computational
screening, solid-state and coordination chemistry, materials science,
electrochemistry and battery engineering. All CARBAT partners have identified
the datasets expected to be generated during the different phases of the
project:
* WP1 (Management, IPR & Communication; Involved partners: CSIC): this WP concerns the management of the project, hence no scientific datasets will be generated. Besides, WP1 is in charge of disseminating the project results to both scientific community and society and will produce 7 reports in Portable Document Format (PDF).
* WP2 (Cathode development; Involved partners: UCM, CSIC, FRAUNHOFER): main outcome of this WP is the selection of a crystal structure, suitable as host for Ca²⁺, its synthesis, structural characterization and electrochemical testing. Expected generated datasets are: crystal structure datasets from DFT calculations (cif formats), output files from DFT calculations (VASP files format), synthesis protocols (.txt), SEM micrographs (.tiff), electrochemical protocols (.txt, Excel, Origin), electrochemical cycling data (ec-lab, Excel, Origin), compatibility and processability protocols (.txt). In addition, 5 reports (PDF) will be produced throughout the completion of the WP.
* WP3 (Electrolyte development; Involved partners: CHALMERS, CSIC, FRAUNHOFER): the main outcome of this WP is the selection of an appropriate electrolyte formulation through computational and physicochemical characterization of its properties, and assessment of possible interactions with the cathode active materials by electrochemical cycling and FTIR measurements. Expected generated datasets are: ionic conductivities (.txt), density and viscosity data (.txt), log-files from DFT calculations (.log), sigma-files from COSMO-RS simulations (cosmo), electrochemical protocols (.txt, Excel), electrochemical cycling data (.txt, Excel, Origin), compatibility and processability protocols (.txt). In addition, 5 reports (PDF) will be produced throughout the completion of the WP.
* WP4 (Electrode processing and prototyping; Involved partners: CHALMERS, FRAUNHOFER): main objective of this WP is the assembly of demonstrator cell following the development of electrode processing protocols, validation of selected materials from WP2&3 and benchmarking of results _vs._ LIB technology. Expected generated datasets are: processing protocols (.txt), electrochemical protocols (.txt, Excel, Origin), electrochemical cycling data (.txt, Excel, Origin), benchmarking/sustainability tests (Excel, Origin). In addition, 3 reports (PDF) will be provided by the end of the WP.
A re-evaluation of the produced datasets, as well as their formats and sizes,
will be provided in the course of completion of each WP.
All CARBAT partners have identified the types of data that will be re-used in
the different phases of the project, and their sources:
* WP1: no re-used data.
* WP2: collection of crystal structures from ICSD and AMCSD databases; synthesis protocols from research articles.
* WP3: no re-used data.
* WP4: collection of models and processes from Ecoinvent database.
# FAIR Data
Research data created within the CARBAT project is owned by the partner who
generates it (GA Art. 26). Each partner must disseminate its results as soon
as possible unless there is legitimate interest to protect the results. A
partner that intends to disseminate its results must give advance notice to
the other partners together with sufficient information on the results it will
disseminate (GA Art. 29.1). Relevant results will be directly shared between
the CARBAT consortium partners on demand, via the website intranet. The
following describes the proper management of data by each partner.
**CSIC** is involved in WP1, 2 and 3.
* CSIC is the lead beneficiary of the deliverable reports produced within WP1 and of the deliverable reports D2.2 and D2.4 produced within WP2. Reports with public dissemination level will be made available on the CARBAT project website, while reports identified as confidential will be available only for the members of the consortium (website intranet) and for the Commission Services (H2020 FET Open platform).
**UCM** is involved in WP1 and 2.
* UCM is the lead beneficiary of the deliverable reports D2.1 and D2.3 produced within WP2. Reports with public dissemination level will be available on the CARBAT project website, while reports identified as confidential will be available only for the members of the consortium (website intranet) and for the Commission Services (H2020 FET Open platform).
**CHALMERS** is involved in WP1, 3 and 4.
* CHALMERS is the lead beneficiary of the deliverable reports D3.1-3.4 produced within WP3 and D4.1 within WP4. Reports with public dissemination level will be available on the CARBAT project website, while reports identified as confidential will be available only for the members of the consortium (website intranet) and for the Commission Services (H2020 FET Open platform).
**FRAUNHOFER** is involved in WP1, 3 and 4.
* FRAUNHOFER is the lead beneficiary of the deliverable reports D2.5 produced within WP2, D3.5 within WP3 and D4.1-4.2 within WP4. Reports with public dissemination level will be available on the CARBAT project website, while reports identified as confidential will be available only for the members of the consortium (website intranet) and for the Commission Services (H2020 FET Open platform).
Research data generated can be stored (after IPR protection and/or
publication) in the CSIC repository (DIGITAL.CSIC, which provides secure,
long-term storage for research data), and/or the Fraunhofer home repository
(Fraunhofer-ePrints, an Open Access server complying with standards and
services that promote data integrity, exchange and long-term archiving).
Each deposited data file will be accompanied by a metadata file in order to
describe it, make it discoverable and traceable. When needed, embargo period
on sensitive data will be used, however information about the restricted data
will be published in the data repository, and details of when the data will
become available will be included in the metadata file.
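A minimal sketch of what producing such a sidecar metadata file could look like is given below, assuming a JSON sidecar written next to each deposited data file; the field names are illustrative only, since the actual metadata schemas are defined by the target repositories (DIGITAL.CSIC, Fraunhofer-ePrints).

```python
import json
from pathlib import Path

def write_metadata_sidecar(data_file: str, title: str, authors: list[str],
                           description: str,
                           available_from: str | None = None) -> Path:
    """Write an illustrative JSON metadata file next to a deposited data file.

    `available_from` records when embargoed data becomes available,
    as required by the DMP text above; it stays None for open data.
    """
    record = {
        "project": "CARBAT",
        "funding": "European Union Horizon 2020",
        "title": title,
        "authors": authors,
        "description": description,
        "data_file": Path(data_file).name,
        "available_from": available_from,
    }
    sidecar = Path(str(data_file) + ".meta.json")
    sidecar.write_text(json.dumps(record, indent=2), encoding="utf-8")
    return sidecar

# Example: an embargoed electrochemical cycling data set.
write_metadata_sidecar("cycling_run_01.xlsx", "Ca cathode cycling data",
                       ["CSIC"], "Galvanostatic cycling of a candidate cathode",
                       available_from="2020-01-01")
```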
The CARBAT partners commit to generating data in a standardised way (using
standard formats as described in the “Data summary” section) in order to make
the data interoperable, ensuring that the datasets can be understood and
interpreted on their own, alongside the accompanying metadata and
documentation.
A strategy for storage of the research data files after completion of the
project is being developed by the CARBAT consortium and will be included in
the DMP later.
All CARBAT partners expect to produce highly publishable results during the
project; these results will be made available in the form of scientific
articles. Each partner is responsible for always providing Green Open Access
to the research publications it has produced, and Gold Open Access where
possible.
Where a restriction on open access to data is necessary, attempts will be made
to make the data available under controlled conditions to other individual
researchers. All research data will be of the highest quality and long-term
validity, and will be well documented so that other researchers can access and
understand them even after 5 years. If datasets are updated, the partner that
holds the data is responsible for managing the different versions and for
making sure that the latest version is available in the case of publicly
available data. Quality control of the data is the responsibility of the
partner generating it.
# Allocation of resources
There are no immediate costs anticipated to make the data produced FAIR, since
during the CARBAT project the datasets will be deposited in the corresponding
institution home repositories by each CARBAT partner. Any unforeseen costs
related to open access to research data in Horizon 2020 are eligible for
reimbursement throughout the duration of the project under the conditions
defined in Article 6.2 of the Grant Agreement.
# Data security & Ethics
The CARBAT partners have access to institutional home repositories for long
term storage and preservation of the generated research data. The DIGITAL.CSIC
repository has been awarded the Data Seal of Approval (DSA), certifying that
this repository meets the national and international quality guidelines for
digital data archiving. All the DSA information is public and accessible
at the following link:
_https://assessment.datasealofapproval.org/assessment_120/seal/html/_.
The Fraunhofer-ePrints repository follows well documented data publication
policies, accessible at the following link:
_http://publica.fraunhofer.de/starweb/ep09/en/guide_2.htm_ .
Each CARBAT partner shall respect the policies set out in this DMP. Datasets
have to be created, managed and stored appropriately and in line with European
Commission and local legislation. Dataset validation, registration of
metadata, and backing up of data for sharing through repositories are the
responsibility of the partner that generates the data in the WP.
Consortium partners will impose a strict policy on all employees, co-workers,
subcontractors, etc. having access to the data. This policy will include, but
is not limited to: contractual clauses, agreement to terms and conditions
before access is granted, allowing copies on local devices only during
processing of the data with guaranteed erasure after being processed, etc.
The CARBAT project does not involve the use of human participants or personal
data in the research and therefore there is no requirement for ethical review.
# Conclusions
This Data Management Plan provides an overview of the data expected to be
produced in the course of completion of the CARBAT project and its proper
management (according to the FAIR principles), together with related
constraints that need to be taken into consideration.
All project partners will be producers and owners of data, which implies
specific responsibilities in terms of data quality, accessibility and
preservation, described in the present document. Specific attention will be
given to ensuring that the data made public breaks neither confidentiality nor
partner IPR rules.
# 1\. Executive Summary
This document is a deliverable of the FLASH Project which is funded by the
European Union’s H2020 Programme under Grant Agreement No. 766719\. This first
version of the Data Management Plan (DMP) describes the main elements of the
data management policy that will be used by the members of the Consortium with
regard to the data generated throughout the duration of the project.
The DMP is released in compliance with the H2020 FAIR principles (making data
Findable, Accessible, Interoperable and Reusable) and will be updated at
months M18 and M36.
The data generated in FLASH will be mainly experimental and characterization
data, design data, computational modeling data, publications and project
documents and reports. FLASH will ensure open access to all the data necessary
to validate the research results of the project, including publications.
Research data linked to exploitable results will not be immediately available
in the open domain in case this may compromise its commercialization
prospects. If needed, embargo periods may be defined and specified in the
future versions of this document.
# 2\. Scope of the deliverable
The purpose of this document is to provide the plan for managing the data
generated and collected during the project. The DMP describes the data
management life cycle for all datasets to be collected, processed and/or
generated by FLASH. It covers:
* the handling of research data during and after the project;
* what data will be collected, processed or generated;
* what methodology and standards will be applied;
* whether data will be shared/made open and how;
* how data will be curated and preserved, following the EU's guidelines regarding the DMP.
The DMP is a live document, updated during the project as illustrated in
Figure 1. We assume three incremental releases of the DMP: After the initial
release at M6, at months M18 and M36 (end of the project), respectively.
**Figure 1: DMP Life Cycle** (initial release at M6, second release at M18, final release at M36).
Any new version of the DMP will include all the information of the previous
release, together with the necessary updates and corrections. After the
release date of a new DMP, the information contained in the previous versions
will be considered obsolete.
# 3\. FLASH DMP
FLASH is a project that aims to develop a room-temperature THz laser
integrated on Si using CMOS technology-compatible processes and materials. The
laser, of quantum-cascade type, will be assembled using newly developed
conduction-band germanium-rich heterostructures. It will leverage on the non-
polar nature of Si and Ge crystal lattices to potentially enable room-
temperature operation and will emit > 1 mW power in the 1-10 THz range. The
members of the Consortium working on the project are: 1. Laboratory of
Mesoscopic Physics and Nanostructures-Department of Science, Roma Tre
University (Italy); 2. University of Glasgow-UGLA; 3. Innovations for high
performance microelectronics/Leibniz-Institute-IHP; 4. Quantum Optoelectronic
Group- Department of Physics ETH Zurich-ETH; 5. Nextnano GmbH-NXT.
FLASH's vision of a DMP is inspired by the FAIR data principles [1]. Therefore,
FLASH will consider the following approaches as far as applicable for
providing the open access to research data:
* published data available on the Web (whatever the format) under an open license;
* use of non-proprietary formats (e.g. CSV or TSV);
* use of metadata and a Fact sheet to describe data;
* linking data to other data to provide context.
The policy for open-access to the research data in FLASH will follow the basic
principle: “as open as possible, as closed as necessary” which can be
translated into two core principles:
1. The generated research data should generally be made as widely accessible as possible in a timely and responsible manner;
2. The research process should not be impaired or damaged by the inappropriate release of such data.
The FLASH consortium shall implement procedures that are in line with the
national legislation of each consortium partner and with European Union
standards. This DMP will apply to all data under FLASH consortium control.
While we shall strive to make data open, we cannot overrule limitations that
partner institutions put on the data they contribute (see e.g. the grant
agreement).
The Governing Board (GB) will assess under strict criteria the nature of the
data and will give advice in order to establish their categorization as
**open** , **embargo,** or **restricted,** taking into account that the main
aim to disseminate the results of the research should be balanced with the
necessity to protect the interests of the partners involved in the project.
In that sense, data sets containing key information that could be patented for
commercial or industrial exploitation will be excluded from public
distribution, and data sets containing key information that could be used by
the research team for publications will not be shared until the embargo period
applied by the publisher is over.
In particular, open access to the research data can be denied when:

* the results are commercially or industrially exploited;
* there is incompatibility with confidentiality and security issues;
* personal data (privacy) must be protected;
* the disclosure is likely to jeopardise the achievement of the main aim of the action;
* there is another legitimate reason.
In the following, we define a **data set** as either an individual file (such
as the pdf file containing a deliverable or a report) or an ensemble of
logically connected files, such as different measurements of the same sample
or measurements sharing some physical observable. In the latter case, the
files will be bundled in a single zip file labelled with a single metadata
record.
All the data sets, regardless of their categorization, will be stored in the
consortium private-access data repository accessible through the SFTP/SSH
protocol. In addition, those categorized as open or embargo will be publicly
shared (in the case of embargo, after the embargo period is over) through the
public section of the project website ( _www.flash-project.eu_ ) and ZENODO
(https://zenodo.org/), an open access repository for all fields of science
that allows uploading any kind of data file formats, which is recommended by
the Open Access Infrastructure for Research in Europe (OpenAIRE).
For all the data sets a Fact sheet will be filled by the authors in order to
summarize the characteristics of each data set to give a quick understanding
of the content of the data to anyone that reads it.
In order to make those data sets that are publicly shared as discoverable and
accessible as possible the following aspects will be considered:
* The **Metadata** will play a key role in improving the discoverability of what we upload for the general public. Given the huge amount of information that can be found on the Internet, it is necessary to use a standard set of encoded labels on the websites where we store our data, to make it easier for search engines (such as Google, Bing, Yahoo) to find it. There will be three types of metadata for each public data set: common metadata (related to the EU, H2020, and the name and number of the project), specific metadata (3 keywords chosen by the authors of the data) and fact sheet metadata (related to the nature, origin, description, authors, potentially interested groups, etc.). Considering the amount of data that will foreseeably be produced in the project, we shall gather similar data in compressed files sharing a common Fact sheet and Metadata (e.g. spectroscopic data of devices for different excitations).
* The **Standardization** of the names and formats of the files stored and uploaded will also improve the accessibility to the information, as we will describe in the following.
The FLASH consortium will take, within the boundaries above stated, the
appropriate measures so that the research data generated in the project is
easily discoverable, accessible, assessable and intelligible, useable beyond
the original purpose for which it was collected and interoperable to specific
quality standards.
# 4\. Metadata and standardization in FLASH
In the context of this document, metadata is organized information labelling a
data set and encoded in the code of the websites in order to facilitate
discovery and reuse of the information by third parties.
Three types of metadata will be defined for each data set:
1. **Fact sheet information** : As stated in the Section “5. Fact sheet information” of this document, for each data set the authors will have to fill a Fact sheet that allows anyone to quickly identify the content of the data set.
2. **Common metadata** : According to the “Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020” regarding the research data generated, the beneficiaries of the grants should follow Article 29.3 of the Grant Agreement which states that the bibliographic metadata must be in a standard format and must include all of the following terms:
a. European Union (EU);
b. Horizon 2020;
c. Name of the project: Far-Infrared Lasers Assembled using Silicon Heterostructures;
d. Acronym: FLASH;
e. Grant number: 766719.
3\. **Specific metadata** : The authors will have the option to choose up to 3
Keywords that they consider relevant for the data set and can be of frequent
use if someone is searching for the kind of data contained on the data set.
Once the Fact sheet is completed, it will be sent with the data set to the
Website managers, who will use the information indicated by the authors to
complete the metadata of the data sets that are going to go public. Metadata
will not be used for data sets that have been categorized in the Fact sheet as
“Restricted”.
In order to make the information accessible for internal and external users
and according to the good practices for “Open data” free file formats such as
PDF, OpenOffice, PNG (portable network graphics) and SVG (scalable vector
graphics) will be prioritized when uploading information.
Regarding the names of the files, short descriptive and consistent file names
will be key to make it easier to locate the information needed now and in the
future. The rules to name data set files are reported in Table 1:
**Convention**: `[time_stamp]_FLASH_[data type]_[data postfix]_[version].[file format]`

| Item | Time Stamp | Data Type | Data type postfix | Version | File format |
| --- | --- | --- | --- | --- | --- |
| Optional | X | | X | X | |
| Definition | YYYY_MM_DD | Design, Simulation, Measurement, Publication, Document, Report, Deliverable, Presentation | arbitrary, e.g. specific equipment used for acquiring (AFM, SEM, etc.), project meeting, type of model or simulation | v#.# | according to software: pdf, jpg, zip, xlsx, docx, pptx, … |
| Examples | 2018_03_31 | DeliverableReport | D1.1 | v1.1 | as above |

**Complete example**: `2018_03_31_FLASH_DeliverableReport_D1.1_v1.1.pdf`

**Table 1: FLASH data file naming convention.**
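As a concrete illustration of the convention in Table 1, the short Python sketch below assembles and checks file names; it is illustrative only, and the regular expression is a loose validator rather than a normative part of the convention.

```python
import re
from datetime import date

def flash_filename(data_type: str, file_format: str, postfix: str = "",
                   version: str = "", stamp: date | None = None) -> str:
    """Assemble [time_stamp]_FLASH_[data type]_[data postfix]_[version].[file format];
    time stamp, postfix and version are optional (see Table 1)."""
    parts = [stamp.strftime("%Y_%m_%d")] if stamp else []
    parts += ["FLASH", data_type]
    parts += [p for p in (postfix, version) if p]
    return "_".join(parts) + "." + file_format

# Loose pattern for checking existing names against the convention.
FLASH_NAME = re.compile(
    r"^(?:\d{4}_\d{2}_\d{2}_)?FLASH_[A-Za-z]+(?:_[\w.\-]+)?(?:_v\d+\.\d+)?\.\w+$")

name = flash_filename("DeliverableReport", "pdf", postfix="D1.1",
                      version="v1.1", stamp=date(2018, 3, 31))
assert name == "2018_03_31_FLASH_DeliverableReport_D1.1_v1.1.pdf"
assert FLASH_NAME.match(name)
```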
# 5\. Fact Sheet information
For each data set the researcher will fill the Fact Sheet shown in Table 2.
The fields specified in that Fact Sheet should be filled according to the
following rules and recommendations.
## 5.1 Data set description
### 5.1.1 Reference
Each data set will have a reference that will be generated by the combination
of the name of the project, the Work Package and Task in which it is generated
and a consecutive number (15 characters maximum, for example: FLASH_T1.0_01).
### 5.1.2 Description
An intelligible description of the data collected, understandable for people
that do not directly work in the project, and independent from other data set
descriptions, so it can be understood without having to go through every data
set. (60 characters maximum).
### 5.1.3 Authors
The name of the Authors and the Entity will have to be completed.
### 5.1.4 Origin
The researchers will have to select the origin or origins of the data from the
following options:

* Laboratory experimental data;
* Computer simulation;
* Review;
* Design, drawings;
* Papers;
* Other (to be specified).
### 5.1.5 Nature
The researchers will have to select the nature of the data from the following
options:

* Documents (text, word processors), spreadsheets;
* Laboratory notebooks;
* Type of model, simulation (i.e. Nextnano simulation of wavefunctions);
* Data type based on the equipment used (e.g. XRD, FTIR data).
### 5.1.6 Sharing Status

The researchers will have to select the sharing status from the following
options:
* Open: available to the public without restriction.
* Embargo: When a data set is published in a journal, it will become public following the embargo policies of the publisher.
* Restricted: Only for project internal use.
### 5.1.7 Required software for opening the file
The researchers will have to specify the software required or suggested for
opening the file.
### 5.1.8 Whether it underpins a scientific publication
The researchers will have to answer “Yes” or “No”, and in case the answer is
“Yes” they will have to give the reference and date to the mentioned
publication in the following format: _“NAME OF THE PUBLICATION.Year of
publication. DOI”._
| Field | Content |
| --- | --- |
| **Reference** | |
| **Description** | |
| **Authors** | |
| **Origin** | |
| **Nature** | |
| **Sharing status** | |
| **Required Software** | |
| **Whether underpins publication** | |
| **Common metadata** | European Union (EU); Horizon 2020; Far-Infrared Lasers Assembled using Silicon Heterostructures; FLASH; Grant number: 766719 |
| **Specific metadata** | |

**Table 2:** _Example of Data Fact Sheet._
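A minimal sketch of how a Fact Sheet record could be captured programmatically is shown below; the fields and checks mirror Table 2 and the rules of Sections 5.1.1-5.1.8, but the names and types are otherwise illustrative.

```python
from dataclasses import dataclass, field

# Common metadata attached to every public data set (GA Article 29.3).
COMMON_METADATA = ("European Union (EU)", "Horizon 2020",
                   "Far-Infrared Lasers Assembled using Silicon Heterostructures",
                   "FLASH", "Grant number: 766719")

@dataclass
class FactSheet:
    reference: str              # e.g. "FLASH_T1.0_01", 15 characters maximum
    description: str            # intelligible description, 60 characters maximum
    authors: str                # names and entity
    origin: str                 # e.g. "Laboratory experimental data"
    nature: str                 # e.g. "Data type based on the equipment used"
    sharing_status: str         # "Open", "Embargo" or "Restricted"
    required_software: str
    underpins_publication: str = "No"   # or "NAME OF THE PUBLICATION. Year. DOI"
    keywords: list[str] = field(default_factory=list)  # up to 3 specific metadata

    def __post_init__(self):
        if len(self.reference) > 15 or len(self.description) > 60:
            raise ValueError("reference/description exceed the length limits")
        if self.sharing_status not in {"Open", "Embargo", "Restricted"}:
            raise ValueError("invalid sharing status")
        if len(self.keywords) > 3:
            raise ValueError("at most 3 specific-metadata keywords")
```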
# 6\. Type of data in FLASH
This section gives an overview of the research data which are generated,
collected, processed and stored in FLASH. This includes the data description
for the different types and formats, their purpose with respect to the project
objectives and tasks, their potential for re-use and how, the data origin, the
expected data size, and to whom the data might be useful.
Data generated in FLASH will be strictly digital. In general, the data file
formats to be used shall meet the following criteria:

* widely used and accepted as best practice within the specific discipline;
* self-documenting, i.e. the digital file itself can include useful metadata;
* independent from specific platforms, hardware or software.
However, different types of data will be generated and handled in FLASH. Given
the technical disciplines involved in the FLASH project, high-technology
equipment and processes are used; therefore, most of the research data will be
in proprietary formats. The main data types are described in the following:
## Design data
Design results are schematics and layouts of the active material and of device
components. The digital format of the designs is mostly proprietary (e.g. gds
files) to specialised software (e.g. AutoCAD), which is therefore required for
data re-use. Since such software is only available under commercial licenses,
it cannot be provided by the consortium, which makes it impossible to provide
open access to the design data themselves. However, it will be possible to
provide pictures and screenshots of the designs, which will be collected in
design reports or the corresponding deliverable reports.
## Simulation data
A further data type is generated by collecting results from simulations. These
are used to evaluate and estimate the performance and properties of the
simulated device. For the simulations, specialised software is typically used,
which requires dedicated licenses for research or business use. Such software
includes, for example, COMSOL MULTIPHYSICS, MATLAB, Mathematica and the
software developed by one of the consortium members (nextnano). Simulation
results are, to a large extent, proprietary data sets only usable with the
simulation software. In some cases simulation results can be exported as
ASCII-coded text files or to typical database formats and spreadsheets with a
complete description of the data set (list of fields); these have to be
post-processed by (simulation data) evaluation software. In most cases the
outcomes of simulations are diagrams of the parameters of interest and
screenshots. For these reasons it is not useful to provide the raw simulation
data for open access in FLASH. It makes more sense to provide the processed
simulation results as diagrams, collected and explained in project reports and
open-access publications. In the case of simulation text files or
spreadsheets, the data set can be put in a zip archive and attached to the
report. However, simulation data will only be made open after the results have
been published.
## Measurement data
These data are produced by laboratory experiments and hardware analysis. The
data are measurements of specific parameters related to devices, components
and material properties. Similarly to simulation data, measurement data are
usually proprietary to the equipment used and its corresponding software.
Sometimes, typical database formats or spreadsheets with a complete
description of the data set (list of fields) are also available. Therefore, as
for simulation data, open access to raw measurement data is not worthwhile in
FLASH. Detailed measurement data will be made accessible through detailed
Supplementary Information sections associated with published articles and,
after publication, via measurement reports which also describe the measurement
environment in detail, e.g. specific performance parameters, test and
measurement equipment, and experimental setups. If applicable, database-based
measurement data sets will be attached as zip archives to the reports.
## Publications
The most open and visible way to disseminate the data sets produced within
FLASH will be the publication of research results in scientific articles.
Publications are created by one partner for individual results or as joint
publications on joint research efforts. Publications are made in scientific
journals and at conferences, mostly in the form of pdf files, which are
commonly usable. In the context of publication data, the open-access approach
of Horizon 2020 is embraced by FLASH, following the guidelines presented by
the Commission. We shall aim at publishing the project results mainly in
fee-based open-access scientific journals, following the Gold open-access
route, and selecting those expected to give the largest visibility to FLASH
research activity. For this reason, costs for publication fees have been
foreseen in the consortium budget. It is anticipated that FLASH researchers
will primarily target the Green open-access route for conference and workshop
contributions, since the two open-access routes are not mutually exclusive. In
that case the published article or the final peer-reviewed manuscript is
archived by the researcher in an online scientific repository (FLASH website,
arXiv, ZENODO repository, etc.) before, after, or alongside its publication,
and the authors must ensure open access to the publication within a time frame
defined by the publisher (embargo times are usually six months to one year).
## Project documents and reports
A second major data set for open access will be project documents and reports,
such as deliverable reports. They are generated and collected to summarize the
project progress and results, as well as to discuss different approaches,
challenges and deviations with regard to the FLASH objectives. Reports can be
related to design, simulation and measurements and contain the processed data.
As long as documents are not classified as confidential and do not contain any
confidential data, they are per se public. Normally, documents and reports are
in standard pdf format. Public FLASH documents and reports will be made
available and accessible on the FLASH webpage after their submission and
publication.
# 7\. FAIR data in FLASH
## 7.1 Making data findable
The research data from this project will be deposited both in:
* A dedicated website for the project: The domain of the website is **https://www.flash-project.eu/.**
* An open access repository: Best practices recommend using an open repository to ensure that the data can be found by anyone. The shared data sets of the FLASH project will be deposited in the ZENODO repository (https://zenodo.org/). This is one of the free repositories recommended by the Open Access Infrastructure for Research in Europe (OpenAIRE) on their website, and it is an open repository for all fields of science that allows uploading any kind of data file formats.
Both repositories are prepared to share research data in different ways
according to how the partners decide the data should be shared:
_The dedicated website for the project_ : Information can be shared in the
website at two different levels:
* A private access intranet for internal management of research data.
Each project participant will have a username and a password, which are
required to enter the intranet and access all the information shared using the
SSH/SFTP protocol.
* A public section for the public access to final research data sets. As stated before in this document the data set shall be understood as aggregated data that can be analysed as a whole and has a conclusive and concrete result, and will not include laboratory notebooks, partial data sets, preliminary analyses, drafts of scientific paper. All the information that it is decided to be shared will have no access restriction.
_An open access repository_ : The same Website managers that post the data
sets in the public section of the FLASH website will simultaneously post them
in the open access repository. ZENODO allows files to be uploaded under
restricted, open or embargoed access.
## 7.2 Making data accessible
Generally, all data can be made accessible after publication. However, as
previously described, most of the data types in FLASH come in proprietary
formats or are subject to special restrictions (e.g. software licenses,
patents) which prohibit free access to the data. For this reason, it has been
decided in FLASH that the use of a public repository for raw data is not
beneficial at the moment. The data will be stored, however, in the intranet
section of the FLASH website. A dedicated repository, which links publications
to research data, will be evaluated and selected once the first data are
collected during project progress.
## 7.3 Interoperable data
In this regard, FLASH will use, wherever possible, data formats for
open-access knowledge representation which are formal, accessible, shared and
broadly applicable. Qualified references to other data will be included. For
example, information on the tools and instruments needed to validate the
measurement results is provided with the data sets. In particular, the format
for data sets of equal content, e.g. measurement data, will be a zip archive.
## 7.4 Re-usable data
Within FLASH, re-usability of data is ensured by the fact that all the data
provided for open access, e.g. documentation, reports and papers, will be
inherently public (and free of charge). Therefore, no special licenses are
established at the moment. However, although the data will be public, the
FLASH project and its consortium members reserve the copyright of the
material. For re-usability, the data will be stored on the webpage, or on a
repository system once implemented, for at least ten years.
# 8\. Data Security
Data will be stored on the IHP server on which the webpage of the project is
located. The IHP repository is managed and supported by a team of experts and
is subject to the institute's data security measures and backup; the costs are
covered by the IHP budget. Passwords are distributed separately. The original
copies of the data will also be stored in the databases of the entities that
created them.
# 9\. Ethical Aspects
In the FLASH project, there are no ethical or legal issues that impair data
management. The research does not create, process or store personal data.
Personal data of the FLASH consortium members are not subject to the project's
data management.
3) Reporting. The process of monitoring the data archives, and the ability to
pull out Key Performance Indicators that can inform decision making for
further improvements in the Data Management system
The plan addresses three areas: Architecture (define the data repository
structure), Process (define the data archiving processes) and Reporting
(define the data monitoring and feedback strategies). A plan has been
developed to provide detail in each of these areas as follows:
1. **_Architecture_ **
Due to the high volume of data that is expected to be generated during the
project, all of the data will be required to be stored locally at each
partner’s site, but results of significance will be shared internally using a
common HIMALAIA repository and externally using standard online repositories,
as shown in Figure 2.
_Figure 2 – Architecture of data storage in HIMALAIA: each partner (Partners
1, 2 and 3) keeps a local repository; results of significance are shared
through the common HIMALAIA repository and its database (DB), and externally
through online repositories (journals, FigShare, OpenAIRE) via the online
platform._
An index of the data available will be stored in a software database that will
be found within the HIMALAIA repository. This will give a snapshot of the
entirety of data generated as well as providing a searchable index and links
to the location of the actual data resource. This will also provide a useful
tool for monitoring the amount of generated data and its significance for
reporting purposes.
2. **_Process_ **
The management of HIMALAIA data requires standardised and timely reporting of
data which can be shared within the project and with external stakeholders.
The standard process for handling new data generated during the project is
defined in Figure 1. This process will ensure that all data generated is
captured and can be made available to both internal and external stakeholders
if required. Stakeholders will also be able to see that data has been recorded
in advance of processing and analysis and can prepare activities in advance of
the publication of results.
_Figure 3 – Workflow for newly generated data._ New data generated in a WP
activity receives a WHITE entry in the data monitoring DB and is stored in a
secure local archive. If analysis shows the results are significant, the entry
is elevated to GREEN status, the data are copied to the HIMALAIA archive and
the monitoring DB entry is updated. If the results are also significant to the
international community or worthy of publication, the entry is elevated to
GOLD status, the data are copied to an online archive and published on
OpenAIRE, and the monitoring DB entry is updated.
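The decision points of Figure 3 can be summarised in a few lines of code; the sketch below is illustrative only, assuming the WHITE/GREEN/GOLD statuses described above.

```python
from enum import Enum

class Status(Enum):
    WHITE = "secure local archive only"
    GREEN = "copied to HIMALAIA archive"
    GOLD = "copied to online archive and published on OpenAIRE"

def classify(significant: bool, publication_worthy: bool) -> Status:
    """Every new data set starts WHITE; significant results are elevated to
    GREEN; results worthy of publication are elevated to GOLD. The data
    monitoring DB entry is updated at each elevation."""
    if significant and publication_worthy:
        return Status.GOLD
    if significant:
        return Status.GREEN
    return Status.WHITE
```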
3. **_Reporting_**

Database monitoring tools will be used to extract key performance indicators
(KPIs) from the extensive data sets in the archive.
**II. HIMALAIA expected data streams and related data management**
It is important that the various data sets generated throughout the project
can be easily identified to facilitate the evaluation of the key
functionalities that result from each process chain. In order to achieve this,
there will be a fundamental identification code for each of the mould inserts
generated in WP2, and this code will serve as the first parameter in a unique
identification code for each set of data generated as the project evolves. The
code will be based on the date of production of the insert, the laser
parameters and the functionality.
For example:
**YYYYMMDD-xxxxxxxxxxx-AB**
The first 8 digits are the date of manufacture and the remaining numbers are
the laser manufacturing parameters. The letters represent the end-user
application (AB means antibacterial).
Further identification information will be appended to this root code for each
data stream as specified in this section.
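A minimal sketch of how the root code and a per-stream ID could be generated is shown below; the laser-parameter string is a hypothetical placeholder, and the date layout of the stream suffix follows the per-stream definitions given later in this section.

```python
from datetime import date

def insert_code(manufactured: date, laser_params: str, application: str) -> str:
    """Root identification code YYYYMMDD-xxxxxxxxxxx-AB for a mould insert."""
    return f"{manufactured:%Y%m%d}-{laser_params}-{application}"

def stream_id(insert: str, process_id: str, run: int, recorded: date) -> str:
    """Per-stream ID InsertCode_ProcessID_YYMMDD_RunNumber (sections 2.2-2.8;
    section 2.1 uses MatID_TestID in place of the insert code)."""
    return f"{insert}_{process_id}_{recorded:%y%m%d}_{run:02d}"

code = insert_code(date(2018, 5, 14), "02500150080", "AB")  # AB = antibacterial
print(stream_id(code, "L1", 3, date(2018, 5, 20)))
# -> 20180514-02500150080-AB_L1_180520_03
```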
## A. 2.1 Material Data
### 2.1.1 Interest of and use of materials data
Material data consists of reference data (processing settings, MSDS, physical
properties) and measured data (rheology curves, Differential Scanning
Calorimetry data, Thermogravimetric analysis data). These will be used for
defining initial process conditions, investigating the influence of material
properties on replication performance and as input data for simulation
studies.
### 2.1.2 Data format, availability and management
Typically single figure parameter values and x,y curves. These have no
significant storage requirement.
ID format: MatID_TestID_**YYYYMMDD**_RunNumber, where:
MatID is a four-digit alphanumeric code indicating the material Grade and
Batch Number;
TestID is a 3-digit alphanumeric code identifying the test or data sheet type;
**YYYYMMDD** is the date the data was recorded;
RunNumber is a two-digit numeric code indicating the specific run number.
## B. 2.2 Processing parameters data
### 2.2.1 Interest of and use of processing parameters data
Processing parameter data consists of machine settings and configurations used
in a process chain. In HIMALAIA such data will be generated extensively for
the surface structuring processes and the injection moulding trials.
### 2.2.2 Data format, availability and management
Typically, single figure values will be defined for each parameter. These have
no significant storage requirement.
ID format: InsertCode_ProcessID_**YYYYMMDD**_RunNumber, where:
InsertCode is as defined earlier in this section;
ProcessID is a two-digit alphanumeric code where the first digit indicates the
machine used to perform the process and the second digit indicates the process
variant;
**YYYYMMDD** is the date on which the process was performed;
RunNumber is a two-digit numeric code indicating the specific run number.
## C. 2.3 Process Characterisation data
### 2.3.1 Interest of and use of processing characterisation data
Process characterisation data consists of sensor and machine response
measurements that measure the processing environment with great accuracy and
provide much more sensitive data than process parameters alone for
understanding a manufacturing process. The data can be used as input data into
models that can predict process performance.
### 2.3.2 Data format, availability and management
Typically single figure parameter values and arrays of x,y curves. These
typically have a medium range storage requirement.
ID format: InsertCode_ProcessID_YYMMDD_RunNumber, where InsertCode, ProcessID,
YYMMDD and RunNumber are defined earlier in this section.
## D. 2.4 High speed image data
### 2.4.1 Interest of and use of processing image data
High speed visible and IR camera acquisitions of processes and experiments in
the HIMALAIA project give an increased understanding of fundamental behaviour
and enable comparison with simulation results.
### 2.4.2 Data format, availability and management
Very large file sizes require a significant amount of data storage.
ID format: CamID_InsertCode_ProcessID_YYMMDD_RunNumber, where:
CamID is a two-digit alphanumeric code indicating the camera type (IR, VS);
InsertCode, ProcessID, YYMMDD and RunNumber are defined earlier in this
section.
## E. 2.5 Surface Characterisation Data
### 2.5.1 Interest of and use of surface characterisation data
Surface characterisation data is key for the success of the HIMALAIA project
as it assesses the surface topology of the manufactured components that will
exhibit the advanced functionalities. The data will be used throughout the
project to explore the links between functionality, on the one hand, and
materials, tooling and processes on the other.
### 2.5.2 Data format, availability and management

These are x,y,z data sets, which can be quite large for large areas; they have
a medium storage requirement.

ID format: SurfID_InsertCode_ProcessID_YYMMDD_RunNumber, where:
SurfID is a two-digit alphanumeric code indicating the surface measurement
type;
InsertCode, ProcessID, YYMMDD and RunNumber are defined earlier in this
section.
## F. 2.6 Antimicrobial functionality data
### 2.6.1 Interest of and use of antimicrobial functionality data
Antimicrobial efficacy data is key for the success of the HIMALAIA project (in
particular for the industrial partners whose applications relate to it: EO and
ALBEA), as it assesses how the surface topology of the manufactured components
affects bacterial adhesion, viability and biofilm formation, and whether
certain topologies can bring about advanced functionalities. The data will be
used throughout the project to explore the links between antimicrobial
efficacy and material topology towards advanced functional surfaces.
### 2.6.2 Data format, availability and management
These x,y,z data sets can be quite large for large areas and have a medium
storage requirement.

ID format: AMID_InsertCode_ProcessID_YYMMDD_RunNumber, where:
AMID is a two-digit alphanumeric code indicating the antimicrobial efficacy
testing type;
InsertCode, ProcessID, YYMMDD and RunNumber are defined earlier in this
section.
## G. 2.7 Anti-squeak and anti-scratch functionality data
### 2.7.1 Interest of and use of anti-squeak functionality data
Anti-squeak and anti-scratch functionalities are key for the success of the
HIMALAIA project, notably for the automotive applications. Anti-squeak tests
assess how the surface topology of the manufactured components affect the
noisiness of the parts when they rub against adjacent parts (stick-slip
motion). The anti-scratch tests determine to what extent surfaces are
scratched, scuffed or marred under various circumstances.
### 2.7.2 Data format, availability and management
These tests are typically performed on limited areas and have no significant
storage requirement.

ID format: ASID_InsertCode_ProcessID_YYMMDD_RunNumber, where:
ASID is a two-digit alphanumeric code indicating which anti-squeak or
anti-scratch test was applied;
InsertCode, ProcessID, YYMMDD and RunNumber are defined earlier in this
section.
## H. 2.8 Optical appearance data
### 2.8.1 Interest of and use of optical appearance data
Intriguing optical functionalities are key for the aesthetic appeal of
packaging, notably for cosmetic applications in Himalaia. Optical tests assess
how the surface topology of the manufactured components affects the visual
effects of the parts (butterfly wing effects, holograms, other intriguing
effects).
### 2.8.2 Data format, availability and management

These tests provide x,y,z data sets and are typically performed on limited
areas. They have a medium storage requirement.

ID format: OID_InsertCode_ProcessID_YYMMDD_RunNumber, where:
OID is a two-digit alphanumeric code indicating which optical test was
applied;
InsertCode, ProcessID, YYMMDD and RunNumber are defined earlier in this
section.
**III. Conclusions**
This document outlines the system for data management in the HIMALAIA
programme. It defines a standard for metadata that enables reuse and
understanding of Himalaia datasets. It also outlines the architecture for the
storage of data locally, via the shared online project database and via open
platforms as well as a process for evaluating which type of data is to be
stored where.
**Appendix 1. Deliverables and milestones focused on data in HIMALAIA.**
**WP1**

_Objectives related to data:_
* Surface functionalisation;
* Replication platform;
* In-line, model-based functional characterisation.

_Deliverables focused on data:_ reports related to end-user specifications.

_Milestones focused on data:_ consistency between end-user requirements and
envisioned platform capabilities.

**WP2**

_Objectives related to data:_
* Development of laser-based technologies for producing replication inserts with functional micron and sub-micron surface structures/patterns;
* Advancement and adaptation of the use of hexagonal close-packed (HCP) microsphere arrays as laser near-field microlenses, called “photonic nanojet”, for functional texturing/patterning of replication inserts;
* Development of reliable and reproducible laser patterning/texturing of large 3D surfaces on replication inserts by equipping existing multi-axis laser machining platforms with the necessary CAD/CAM and system-level solutions for integrating them into tool-making process chains;
* Adaptation and development of further metal surface engineering technologies to ensure (i) the durability of the mould inserts and (ii) efficient demoulding behaviour;
* Synergistic combination of the latest advances in laser surface patterning/texturing and metal surface engineering to address open issues concerning durability, yield and cost in producing functional surface structures/features on replication inserts for serial manufacture of thermoplastic parts;
* Provision of micron/sub-micron structured mould inserts for testing.

_Deliverables focused on data:_ reports related to
* surface engineering technologies for producing replication inserts;
* laser-based technologies for producing mould inserts with patterned/textured functional surfaces;
* system-level tools for laser processing large 3D surfaces.

_Milestones focused on data:_
* 3D surface patterning technologies validated;
* system-level tools for patterning/texturing large 3D surfaces integrated in GFMS equipment.

**WP3**

_Objectives related to data:_
* Tests of the applicability of HIMALAIA laser-patterned replication inserts for injection moulding;
* Delivery of optimised processing routes for low-cost, repeatable manufacturing of 3D functional surfaces in thermoplastics;
* Recording of process data via an advanced data collection system;
* Monitoring production process quality and defining the relationship between the process environment, pattern replication quality and pattern physical functionalities, in tandem with WP4;
* Informing the strategy for the proposed industrial demonstrator manufacturing processes in WP5.

_Deliverables focused on data:_ reports based on
* multifunction tool and planar insert design;
* material selection;
* replication process;
* optimised injection moulding-based replication parameters;
* simulated data;
* simulations for replication efficiency.

_Milestones focused on data:_
* availability of the process building blocks for injection moulding-based replication of 3D and large surfaces;
* availability of simulation data.

**WP4**

_Objectives related to data:_
* Quantify the functional surface behaviours;
* Relate the effectiveness of the functionalities with the surface metrology parameters;
* Build pattern classifiers (models) for discriminating ‘acceptable’ from ‘non-acceptable’ surfaces according to their functional properties and pre-defined tolerance levels;
* Develop an injection moulding flow solver subroutine to predict the effectiveness of replication and the corresponding surface functionality for user-defined input process conditions.

_Deliverables focused on data:_ reports based on
* developed testing methods;
* in-line metrology for 3D patterned/textured large surfaces;
* nano-safety by design;
* model-based inspection methods for linking functionality with surface parameters;
* user subroutine and documentation for prediction of surface functionality.

_Milestones focused on data:_
* functional characterisation methods for 3D patterned/textured surfaces are validated;
* in-line metrology tool is available with the targeted performances.

**WP5**

_Objectives related to data:_
* Define the layout of the manufacturing platform and the interoperability requirements between the different building blocks;
* Physically implement the technologies and tools developed in the previous WPs on the manufacturing platform;
* Enable efficient data communication between the implemented technologies and tools in order to achieve optimal process control;
* Develop and implement Zero Defect Strategies on the platform.

_Deliverables focused on data:_ reports based on
* platform layout;
* platform with integrated hardware;
* platform automation and process control;
* generic validation procedure;
* Zero Defect strategy;
* best practices for the IM-based platform;
* new paradigms and ICT tools for HIMALAIA’s technological integration with a scenario for broad industrial implementation.

_Milestones focused on data:_
* fully integrated platform;
* generic validation procedure defined.

**WP6**

_Objectives related to data:_
* Demonstrate the production of large and/or 3D parts on the HIMALAIA manufacturing platform;
* Evaluate all demonstrators via both technological and economic assessments.

_Deliverables focused on data:_ reports based on
* Life Cycle Assessment;
* Total Costs and Benefits of Ownership;
* automotive in-cabin components demonstrator production;
* orthodontics demonstrator production;
* cosmetic tubes (packaging) demonstrator production;
* HIMALAIA platform and validation of the demonstrators.

_Milestones focused on data:_ demonstrators available for validation and
creation of data.
# EXECUTIVE SUMMARY
The Europe 2020 strategy for a smart, sustainable, and inclusive economy
underlines the central role of knowledge and innovation in generating growth.
For this reason, the European Union strives to improve access to scientific
information and to boost the benefits of public investment in the research
funded under the EU Framework Programme for Research and Innovation Horizon
2020 (2014-2020).
According to this strategy, in Horizon 2020 a limited pilot action on open
access to research data has been implemented so that participating projects
will be required to develop an Open Data Management Plan (ODMP), in which they
will specify what data will be open.
This deliverable of the Q-SORT project is prepared as part of WP1 as the Open
Data Management Plan (1st version). In this document we initiate the
discussion of the management, life cycle and processing of the data generated
by Q-SORT in order to make such data findable, accessible, interoperable, and
reusable (FAIR).
The aim of this document is to provide an analysis of the main elements of the
data management policy that will be used with regard to all the data sets that
will be generated by the project.
In particular, the deliverable outlines how research data will be handled
during the lifetime of Q-SORT and describes what data will be collected,
processed or generated and following what methodology, whether and how this
data will be shared and/or made open, and how it will be curated and
preserved.
This data management plan is intended as a dynamic document that will be
edited and updated during the project.
## 1 INTRODUCTION
### 1.1 OPEN DATA AND OPEN ACCESS IN H2020 AND IN Q-SORT
In December 2013, the European Commission announced their commitment to open
data through the Pilot on Open Research Data, as part of the Horizon 2020
Research and Innovation Programme 1 . The Pilot’s aim is to “ _improve and
maximise access to and re-use of research data generated by projects for the
benefit of society and the economy_ ”.
In the frame of the Pilot on Open Research Data, results of publicly-funded
research should be disseminated more broadly and faster, for the benefit of
researchers, innovative industry and citizens 2 .
On one hand, Open Access accelerates discovery processes and makes it easier
for research results to reach the market (thus meaning higher returns on
public investment), and also avoids duplication of research efforts, thus
leading to a better use of public resources. On the other hand, this Open
Access policy is also beneficial for the researchers themselves. Making the
research publicly available increases the visibility of the performed
research, which translates into a higher number of citations 3 as well as an
increase in the collaboration potential with other institutions in new
projects, among others. Additionally, Open Access offers small and medium-
sized enterprises (SMEs) access to the latest research for utilisation.
Under H2020, each beneficiary must ensure open access to all peer-reviewed
scientific publications relating to its results. These open access
requirements are based on a balanced support to both 'Green
open access' (immediate or delayed open access that is provided through self-
archiving) and 'Gold open access' (immediate open access that is provided by a
publisher).
Apart from open access to publications, projects must also aim to deposit the
research data needed to validate the results presented in the deposited
scientific publications, known as "underlying data". In order to effectively
supply this data, projects need to consider at an early stage how they are
going to manage
and share the data they create or generate. In addition, beneficiaries must
ensure their research data are findable, accessible, interoperable and
reusable (FAIR) 4 .
In this document, we will introduce the first version of the Open Data
Management Plan (ODMP) elaborated for the Q-SORT project. The ODMP describes
how to select, structure, store, and make public the information used or
generated during the project, both considering scientific publications as well
as generated research data. The Q-SORT ODMP follows the ODMP structure given
by the ODMPonline tool 5 as suggested in the EC ODMP template.
We anticipate here that Q-SORT, as a best practice, will make use of the
ZENODO 6 repository (an OpenAIRE 7 and CERN collaboration).
The reasons to use this repository are the following:
* it allows researchers to deposit both publications and data, while providing tools to link them;
* in order to increase visibility and impact of the project the Q-SORT Community has been created in ZENODO, so all beneficiaries of the project can link the uploaded paper to the Community;
* the repository has backup and archiving capabilities;
* ZENODO assigns all publicly available uploads a unique Digital Object Identifier (DOI) for citation;
* it makes the upload easy;
* the repository allows different access rights.
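As an illustration of how a deposit could be scripted against ZENODO's public REST API, a hedged sketch follows; the access token, file name, metadata values and the community identifier are placeholders and assumptions, not project tooling.

```python
import requests

ZENODO = "https://zenodo.org/api/deposit/depositions"
TOKEN = {"access_token": "<personal access token>"}  # placeholder

# 1. Create an empty deposition.
dep = requests.post(ZENODO, params=TOKEN, json={}).json()

# 2. Attach the data file.
with open("dataset.zip", "rb") as fp:
    requests.post(f"{ZENODO}/{dep['id']}/files", params=TOKEN,
                  data={"name": "dataset.zip"}, files={"file": fp})

# 3. Describe the upload; linking it to the Q-SORT Community increases
#    the visibility of the project (the identifier below is assumed).
metadata = {"metadata": {
    "title": "Example Q-SORT data set",
    "upload_type": "dataset",
    "description": "Underlying data for a Q-SORT publication.",
    "creators": [{"name": "Surname, Name", "affiliation": "Institution"}],
    "communities": [{"identifier": "q-sort"}],
    "access_right": "open",  # ZENODO also supports embargoed/restricted
}}
requests.put(f"{ZENODO}/{dep['id']}", params=TOKEN, json=metadata)

# 4. Publishing mints the DOI for citation.
requests.post(f"{ZENODO}/{dep['id']}/actions/publish", params=TOKEN)
```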
Footnotes:

3. “There is evidence that studies that make their data available do indeed receive more citations than similar studies that do not.” Piwowar H. and Vision T.J. 2013, “Data reuse and the open data citation advantage”, https://peerj.com/preprints/1.pdf; see also http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-pilot-guide_en.pdf
4. http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf
5. https://dmponline.dcc.ac.uk/
6. https://zenodo.org
7. https://www.openaire.eu
_Figure 1 - Screenshot of the Zenodo repository_
This ODMP will be updated during the project lifetime.
### 1.2 OBJECTIVES OF THE Q-SORT PROJECT
Q-SORT introduces a revolutionary concept whereby the transmission electron
microscope (TEM) is employed as a so-called Quantum Sorter, i.e. a device that
is able to pick out and display detailed information about electron quantum
states. This in turn provides researchers with precious new information about
the sample being examined.
The project -which includes applications in physics, biology, and
biochemistry- is expected to have a wide-ranging impact due to the ubiquitous
adoption of TEM and STEM across many disciplines. Indeed, strong
interdisciplinarity, featuring a multi-year collaboration between physicists
and biologists, is one of Q-SORT’s defining traits. The project features a
strong international consortium with potential industrial applications.
Q-SORT also has foundational value in physics as it fosters its own kind of
sparse-sensing approach to TEM, advancing the field in the direction of
quantum measurement. Intuitively, sparse sensing is analogous to how we
recognise familiar people from just a few small details, such as a silhouette
or profile, without needing to see their full face: only a few measurements
are taken compared to traditional approaches, yet these are still sufficient
to extract all the relevant information.
The scientific and technical results of the Q-SORT project are expected to be
of significant interest to the scientific community.
### 2\. Q-SORT DATA TYPES
As previously noted, the Q-SORT project is an H2020 Research and Innovation
Action, and hence will generate scientific data; however, the current plan
also addresses the data produced as a result of the Communication and Public
Engagement activities of the project.
**2.1 WHAT IS THE PURPOSE OF THE DATA COLLECTION/GENERATION AND ITS RELATION
TO THE OBJECTIVES OF THE PROJECT?**
The stated Q-SORT project breakthroughs and objectives are:
**Breakthroughs:**
A novel ‘quantum toolbox’ will be developed to enable the following
breakthroughs in advanced characterisation and analysis:
* B1. A general theoretical framework to tailor the Quantum Sorter to a basis of choice.
* B2. The application of the Quantum Sorter to the measurement of atomically-resolved magnetic dichroism.
* B3. Characterisation of the dispersion relation between plasmon energy and orbital angular momentum (OAM) in spherical nanoscale objects 1 (fullerenes and Au nanoparticles).
* B4. A proof-of-concept application of the Quantum Sorter for the direct identification of individual proteins and their orientations using at least one order of magnitude fewer electrons.
**Objectives:**
* To design and fabricate novel electromagnetic (e.-m.) phase-shifting elements that are capable of dynamically controlling the sorting of the wavefunction of the electron beam according to its OAM (T2.3, T2.4, T2.5).
* To develop a theoretical framework of physically sparse sensing via optimised basis change (T3.1, T3.2).
* To develop holograms and tunable phase modulators based on e.-m. elements (T3.3, T3.4).
* To study nanoscale magnetic properties with improved signal-to-noise ratio over present technology (T4.2, T4.4).
* To study for the first time the OAM signatures of plasmonic excitations in chosen nanostructures (T4.1, T4.3).
* To develop ‘safe’ (low-dose or ‘minimally destructive’) measurement techniques aimed at revealing the orientations of single proteins, as well as the characteristic symmetries and motifs of molecules (WP5).
Data collection and generation related to each of the above serves to enable
the realisation, effective achievement, and evaluation of the Q-SORT
breakthroughs and to effect scientific reporting of research and innovation
that will result in each of the objectives.
**2.2 WHAT TYPES AND FORMATS OF DATA WILL THE PROJECT GENERATE/COLLECT?**
The project will generate and collect the following types and formats of data:
* The experimental data, encapsulating associated metadata, needed to validate the results presented in scientific publications (underlying data).
More specifically, .dm3 files are generated by the TEM machines in Q-SORT.
This is a proprietary format from Gatan, which however is quite well
documented. The .dm3 format has its own metadata embedded in the files, which
usually consist of: microscope name, date, user name if necessary, and
technical data such as calibration parameters. We have code to extract this
metadata if it is requested (a minimal sketch of such an extraction is given
after this list). The .dm3 file format can be used to store spectrum (1D),
image (2D), and spectral imaging (2D x 1D) information. Simulation data can
also be transformed to .dm3 by using the STEM_CELL software.
* The Dissemination and Public Engagement data (webinars, videos, print material etc). These consist of common graphic-asset containers such as .mp4 and .pdf
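As an illustration of the metadata extraction mentioned above, the following
is a minimal sketch using the open-source HyperSpy library, which can read
Gatan .dm3 files; this is not the project's in-house code, and the file name
is a placeholder.

```python
# Minimal sketch: reading a Gatan .dm3 file and inspecting its embedded
# metadata with the open-source HyperSpy library (pip install hyperspy).
# Illustrative only; "experiment.dm3" is a placeholder file name.
import hyperspy.api as hs

signal = hs.load("experiment.dm3")   # spectrum (1D), image (2D) or
                                     # spectral-imaging (2D x 1D) data
print(signal.metadata)               # parsed acquisition metadata
print(signal.original_metadata)      # full Gatan metadata tree: microscope
                                     # name, date, calibration parameters, etc.
```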
**2.3 WILL YOU RE-USE ANY EXISTING DATA AND HOW?**
Q-SORT will not be re-using any existing data.
**2.4 WHAT IS THE ORIGIN OF THE DATA?**
The experimental data originates from TEM experiments, part of the original
research which is at the core of Q-SORT.
**2.5 WHAT IS THE EXPECTED SIZE OF THE DATA?**
The expected amount of experimental data generated varies depending on whether
experiments are running or not and on how many trials are required. A rough
estimate of the experimental data produced during the project is 5 GB
(gigabytes).
**2.6 TO WHOM MIGHT IT BE USEFUL ('DATA UTILITY')?**
The Q-SORT project data might be useful:
* to any researcher willing to retry the analysis routines or to test against their own results;
* to academics and university departments and institutes that could use the Q-SORT data for scholarship and teaching purposes;
* to the private sector (industry) for the same above two reasons.
#### 3 MAKING DATA FAIR
Q-SORT partner CNR will regularly upload the above-described experimental data
on **Zenodo** 8 . An embargo of 12 months between data production and data
upload into Zenodo will be enforced in order to allow researchers sufficient
time to process and study experimental data, to re-run experiments, and to
write and publish articles based on such data before the latter is shared
publicly.
Dissemination and Public Engagement data (logos, videos, graphic assets, etc.)
will also be made available to download from Zenodo or from the Q-SORT website
(www.qsort.eu).
**3.1 FINDABLE, ACCESSIBLE, INTEROPERABLE, REUSABLE DATA**
Zenodo embraces the FAIR Principles as defined in _Wilkinson, M. D. et al.
The FAIR Guiding Principles for scientific data management and stewardship.
Sci. Data 3:160018, doi: 10.1038/sdata.2016.18 (2016)_ 9 :
_“_ **To be Findable:**
* **F1** : (meta)data are assigned a globally unique and persistent identifier
○ A DOI is issued to every published record on Zenodo.
* **F2** : data are described with rich metadata (defined by R1 below)
○ Zenodo's metadata is compliant with DataCite's Metadata Schema minimum and recommended terms, with a few additional enrichments.
* **F3** : metadata clearly and explicitly include the identifier of the data it describes
○ The DOI is a top-level and mandatory field in the metadata of each record.
* **F4** : (meta)data are registered or indexed in a searchable resource
○ Metadata of each record is indexed and searchable directly in Zenodo's search engine immediately after publishing.
○ Metadata of each record is sent to DataCite servers during DOI registration and indexed there.
**To be Accessible:**
* **A1** : (meta)data are retrievable by their identifier using a standardized communications protocol
○ Metadata for individual records as well as record collections are harvestable using the OAI-PMH protocol by the record identifier and the collection name.
○ Metadata is also retrievable through the public REST API.
* **A1.1** : the protocol is open, free, and universally implementable
○ See point A1. OAI-PMH and REST are open, free and universal protocols for information retrieval on the web.
* **A1.2** : the protocol allows for an authentication and authorization procedure, where necessary
○ Metadata are publicly accessible and licensed under public domain. No authorization is ever necessary to retrieve it.
* **A2** : metadata are accessible, even when the data are no longer available
○ Data and metadata will be retained for the lifetime of the repository. This is currently the lifetime of the host laboratory CERN, which currently has an experimental programme defined for the next 20 years at least.
○ Metadata are stored in high-availability database servers at CERN, which are separate to the data itself.
**To be Interoperable:**
* **I1** : (meta)data use a formal, accessible, shared, and broadly applicable language for knowledge representation
○ Zenodo uses JSON Schema as internal representation of metadata and offers export to other popular formats such as Dublin Core or MARCXML.
* **I2** : (meta)data use vocabularies that follow FAIR principles
○ For certain terms we refer to open, external vocabularies, e.g.: license (Open Definition), funders (FundRef) and grants (OpenAIRE).
* **I3** : (meta)data include qualified references to other (meta)data
○ Each referenced external piece of metadata is qualified by a resolvable URL.
**To be Reusable:**
* **R1** : (meta)data are richly described with a plurality of accurate and relevant attributes
○ Each record contains a minimum of DataCite's mandatory terms, with optionally additional DataCite recommended terms and Zenodo's enrichments.
* **R1.1** : (meta)data are released with a clear and accessible data usage license
○ License is one of the mandatory terms in Zenodo's metadata, and refers to an Open Definition license.
○ Data downloaded by the users is subject to the license specified in the metadata by the uploader.
* **R1.2** : (meta)data are associated with detailed provenance
○ All data and metadata uploaded are traceable to a registered Zenodo user.
○ Metadata can optionally describe the original authors of the published work.
* **R1.3** : (meta)data meet domain-relevant community standards
○ Zenodo is not a domain-specific repository, yet through compliance with DataCite's Metadata Schema, metadata meets one of the broadest cross-domain standards available._”_
8 http://about.zenodo.org
9 http://about.zenodo.org/principles/
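To make point A1 above concrete, the following is a minimal sketch of
retrieving a record's public metadata through Zenodo's REST API (documented at
https://developers.zenodo.org); the record ID is a placeholder, not an actual
Q-SORT upload.

```python
# Minimal sketch: fetching the public metadata of one Zenodo record
# through the REST API mentioned under A1. The ID is a placeholder.
import requests

record_id = 123456  # hypothetical record ID
response = requests.get(f"https://zenodo.org/api/records/{record_id}")
response.raise_for_status()

record = response.json()
print(record["metadata"]["title"])  # record title
print(record["doi"])                # DOI assigned by Zenodo
```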
**3.2 HOW Q-SORT WILL PROMOTE DATA SHARING AND RE-USE**
Answers to the following questions help to achieve FAIR data.
**3.2.1 How will the data be licensed to permit the widest re-use possible?**
Q-SORT will share data using Creative Commons licenses (see
_https://creativecommons.org/licenses/_ ). More specifically:
* experimental data will be shared under the Attribution-NonCommercial-ShareAlike CC license (CC BY-NC-SA);
* Dissemination and Public Engagement data will be shared under the
Attribution-NonCommercial-NoDerivs CC license (CC BY-NC-ND).
**3.2.2 When will the data be made available for re-use?**
An embargo of 12 months between data production and data upload into Zenodo
will be enforced in order to allow researchers sufficient time
* to process and study experimental data
* to re-run experiments if necessary
* to write and publish articles based on such data before the latter is shared publicly.
This allocation of time is realistic since it takes into account machine
downtime and availability, as well as experiment duration. A sketch of how
this embargo can be expressed in repository metadata is given below.
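As an illustration, a minimal sketch of expressing the embargo in Zenodo's
deposition metadata follows; the date is a placeholder and the exact license
identifier format should be checked against Zenodo's current documentation.

```python
# Minimal sketch: a Zenodo deposition metadata fragment declaring an
# embargoed record, so the record is registered immediately but its files
# only become public once the embargo lapses. Values are placeholders.
embargoed_metadata = {
    "access_right": "embargoed",    # Zenodo access mode for embargoed files
    "embargo_date": "2019-03-15",   # hypothetical: production date + 12 months
    "license": "CC-BY-NC-SA-4.0",   # license chosen in Section 3.2.1; exact
                                    # identifier may vary on Zenodo
}
```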
**3.2.3 Are the data produced and/or used in the project useable by third
parties, in particular after the end of the project?**
Yes. Data generated by Q-SORT can be used by third parties to verify
independently the analysis routines or to test against their own results. It
could also, conceivably, be used in aggregate for big data meta-analysis of
TEM studies.
**3.2.4 How long is it intended that the data remains re-usable?**
For as long as Zenodo exists.
**3.3 DATA PROTECTION**
Q-SORT data protection aspects will be coordinated according to the
recommendations of the relevant national data protection authorities. The
project is aware of, and will work towards compliance with, the upcoming
European data protection rules that will enter into force in May 2018, and
their impact will be considered:
_http://ec.europa.eu/justice/data-protection/reform/index_en.htm_
The H2020 open access policy aims to ensure that the information generated by
the projects participating in that programme is made publicly available.
However, as stated in the EC guidelines on Data Management in H2020 10 ,
projects may legitimately keep certain results closed where justified.
In line with this, Q-SORT shall decide which data is made public according to
aspects such as potential liabilities to commercialisation of technologies,
related IPR protection (by patents or other forms of protection), risks with
respect to achieving project objectives/outcomes, etc.
Q-SORT entails pioneering research that is of key importance to physics,
biology/biochemistry, quantum information manipulation, materials science,
optics. Effective exploitation of Q-SORT research results depends on the
proper management of intellectual property. To this end, the Q-SORT Consortium
will adopt the following strategy (Figure 2):
10 _EC document: “Guidelines on Data Management in Horizon 2020” – version
1.0 – 11 December 2013_
_Figure 2: Process for determining which information is to be made public
(from EC’s document “Guidelines on Open Access to Scientific Publications and
Research Data in Horizon 2020” – v1.0 – 11 December 2013)_
If the research findings result in innovation or in significant new insights,
the members of the consortium will consider two alternatives regarding
protection, i.e.:
○ to withhold the data for internal use or to apply for a patent in order to
commercially exploit the invention and have in return financial gain; in the
latter case, related publications will be therefore delayed until the patent
filing;
○ if said developments are not going to be withheld or patented, to publish
results for knowledge-sharing purposes.
All intended dissemination or protection policies and actions are regulated by
the Q-SORT Consortium
Agreement. Once the relevant protections (e.g. IPR) are secured, Q-SORT
partners may disseminate (subject to their legitimate interests) the obtained
results and knowledge to the relevant scientific communities through
contributions in journals and international conferences in the fields of
physics, biochemistry, materials science, etc.
**3.4 OPEN ACCESS PUBLICATIONS**
The first aspect to be considered in the ODMP is related to open access (OA)
to the publications generated within the Q-SORT project, meaning that any
peer-reviewed scientific publication made within the context of the project
will be available online to any user at no charge. This aspect is mandatory
for new projects in the Horizon 2020 programme (article 29.2 of the Model
Grant Agreement).
The two alternatives recommended by the EC to comply with this requirement
are:
* Self-archiving / ‘green’ OA: In this option, the beneficiaries deposit the final peer-reviewed manuscript in a repository of their choice. In this case, they must ensure open access to the publication within a maximum of six months (twelve months for publications in the social sciences and humanities).
* Open access publishing / ‘gold’ OA: In this option, researchers publish their results in open access journals, or in journals that sell subscriptions and also offer the possibility of making individual articles openly accessible via the payment of author processing charges (APCs) (hybrid journals). Again, open access via the chosen repository must be ensured upon publication.
Publications arising from the Q-SORT project will be made public preferably
through the option of ‘gold’ OA in order to provide the widest dissemination
of the published results through the own webpages of the publishers. In other
cases, the scientific publications will be deposited in a repository (‘green’
OA). Most publishers allow authors to deposit a copy of the article in a
repository, sometimes after a period of restricted access (embargo) 1 . In
Horizon 2020, the embargo period imposed by the publisher must not exceed 6
months (or 12 months for the social sciences and humanities). This embargo
period will therefore be taken into account by the Q-SORT consortium when
choosing the open access modality for the fulfilment of the open access
obligations established by the EC.
Additionally, according to the EC recommendation, whenever possible the Q-SORT
consortium will retain the ownership of the copyright for their work through
the use of a ‘License to Publish’, which is a publishing agreement between
author and publisher. With this agreement, authors can retain copyright and
the right to deposit the article in an Open Access repository, while providing
the publisher with the necessary rights to publish the article. Additionally,
to ensure that others can be granted further rights for the use and reuse of
the work, the Q-SORT consortium may ask the publisher to release the paper
under a Creative Commons license, preferably CC0 or CC BY.
Besides these two considerations (retaining ownership of the publication and
observing an embargo period), the Q-SORT consortium will also consider the
relevance of the journal where the paper is intended to be published. Q-SORT
aims and is expected to publish results in high impact-factor journals.
Therefore, also this aspect will be taken into consideration when selecting
the journals in which to publish the Q-SORT project results.
The following is a list of the journals which will be considered for the
Q-SORT publications, with information about the open access policy of each
journal.
<table>
<tr>
<th>
**Publisher**
</th>
<th>
**Link**
</th>
<th>
**Comments about open access**
</th> </tr>
<tr>
<td>
AIP
</td>
<td>
https://publishing.aip.org/librarians/open-access-policy
</td>
<td>
A paid open access option is available for this journal. If funding rules apply, the publisher's version/PDF may be used on the author's personal website, institutional website or institutional repository.
</td> </tr>
<tr>
<td>
APS
</td>
<td>
https://journals.aps.org/prl/edannounce/PhysRevLett.101.140001
</td>
<td>
The APS gives rights to its authors to use their articles as they wish. The APS has allowed authors the right to publish the APS-prepared, final, and definitive version of the article on their web site or on the author's institution's web site, immediately upon publication. The author's final version could also be put onto e-print servers such as the arXiv. Authors and their institutions could make copies of their articles for classroom use, and others could copy the article for noncommercial use. They recommend that if authors wish to post a complete article from an APS journal, they instead provide a link to the APS site, or to a free copy of the article on their personal web sites.
</td> </tr>
<tr>
<td>
Science
</td>
<td>
http://www.sciencemag.org/authors/science-journals-editorial-policies
</td>
<td>
Immediately after publication, authors may post the accepted version of the paper on their personal or institutional archival website. In addition, one author is provided a "referrer" link, which can be posted on a personal or institutional web page and through which users can freely access the final, published paper on the Science Journal's website.
For research papers created under grants for which the authors are required by their funding agencies to make their research results publicly available (for example, from NIH, Howard Hughes Medical Institute, or Wellcome Trust), Science allows posting of the accepted version of research content (Research Articles and Reports) to the funding body's archive or designated repository (such as PubMed Central) no sooner than six months after publication, provided that a link to the final version of the paper published in the Science Journal is included. The accepted version is the version of the paper accepted for publication after changes resulting from peer review, but before editing by the Science Journal copyediting staff, image quality control, and production of the final PDF.
Authors from institutions that might limit the authors' ability to grant to AAAS any of the rights described in AAAS's license must obtain an approved waiver from their institution to publish with the Science Journal.
Original research papers are freely accessible with registration on the Science Journal's website 12 months after publication.
</td> </tr>
<tr>
<td>
Nature
</td>
<td>
https://www.nature.com/authors/policies/license.html#Self_archiving_policy
</td>
<td>
**Nature Research's policies are compatible with the vast majority of funders' open access and self-archiving mandates.** More information is available on the SHERPA/RoMEO website. Nature Research actively supports the self-archiving process, and continually works with authors, readers, subscribers and site-license holders to develop its policy.
**Preprints.** Nature Research journals support posting of primary research manuscripts on community preprint servers such as arXiv and bioRxiv. Preprint posting is not considered prior publication and will not jeopardize consideration at Nature Research journals. Preprints will not be considered when determining the conceptual advance provided by a study under consideration at Nature Research. Authors posting preprints are asked to respect the policy on communications with the media (http://www.nature.com/authors/policies/embargo.html). The policy on posting and citation of preprints of primary research manuscripts is summarized below:
* The original submitted version of the manuscript (the version that has not undergone peer review) may be posted at any time. Authors should disclose details of preprint posting, including DOI, upon submission of the manuscript to a Nature Research journal.
* For subscription journals, the Author's Accepted Manuscript (authors' accepted version of the manuscript) may only be posted 6 months after the paper is published, consistent with the self-archiving embargo (http://www.nature.com/authors/policies/license.html). Please note that the Author's Accepted Manuscript may not be released under a Creative Commons license. For Nature Research's Terms of Reuse of archived manuscripts please see: http://www.nature.com/authors/policies/license.html#terms
* For subscription journals, the published PDF must not be posted on a preprint server or any other website. However, authors are encouraged to obtain a free SharedIt link of their paper, which can be posted online and allows read-only access. SharedIt links can be obtained by submitting the published article DOI at http://authors.springernature.com/share
* Preprints may be cited in the reference list as below: Babichev, S.A., Ries, J. & Lvovsky, A.I. Quantum scissors: teleportation of single-mode optical states by means of a nonlocal single photon. Preprint at http://arXiv.org/quant-ph/0208066 (2002).
**Author's Accepted Manuscript.** When a research paper is accepted for publication in a Nature Research journal, authors are encouraged to submit the Author's Accepted Manuscript to PubMed Central or other appropriate funding body's archive, for public release six months after first publication. In addition, authors are encouraged to archive this version of the manuscript in their institution's repositories and, if they wish, on their personal websites, also six months after the original publication. Authors should cite the publication reference and DOI number on the first page of any deposited version, and provide a link from it to the URL of the published article on the journal's website. Where journals publish content online ahead of publication in a print issue (known as advance online publication, or AOP), authors may make the archived version openly available six months after first online publication (AOP).
**Open access content.** For open access content published under a Creative Commons licence, the published version can be deposited immediately on publication, alongside a link to the URL of the published article on the journal's website. In all cases, the requirement to link to the journal's website is designed to protect the integrity and authenticity of the scientific record, with the online published version on nature.com clearly identified as the definitive version of record.
**Manuscript deposition service.** To facilitate self-archiving of original research papers and help authors fulfil funder and institutional mandates, Nature Research deposits manuscripts in PubMed Central, Europe PubMed Central and PubMed Central Canada on behalf of authors who opt in to this free service during submission. (This service does not apply to Reviews or Protocols.) More information on Nature Research's Manuscript Deposition Service is available. To take advantage of this service, the corresponding author must opt in during the manuscript submission process. Corresponding authors should be mindful of all co-authors' self-archiving requirements.
</td> </tr>
<tr>
<td>
Elsevier
</td>
<td>
https://www.elsevier.com/about/our-business/policies/sharing
</td>
<td>
**Article Sharing.** Authors who publish in Elsevier journals can share their research by posting a free draft copy of their article to a repository or website. Researchers who have subscribed access to articles published by Elsevier can share too. There are some simple guidelines to follow, which vary depending on the article version you wish to share. Elsevier is a signatory to the STM Voluntary Principles for article sharing on Scholarly Collaboration Networks and a member of the Coalition for Responsible Sharing.
_Preprint_
* Authors can share their preprint anywhere at any time.
* If accepted for publication, Elsevier encourages authors to link from the preprint to their formal publication via its Digital Object Identifier (DOI). Millions of researchers have access to the formal publications on ScienceDirect, and so links will help users to find, access, cite, and use the best available version.
* Authors can update their preprints on arXiv or RePEc with their accepted manuscript.
_Accepted Manuscript._ Authors can share their accepted manuscript:
**Immediately**
* via their non-commercial personal homepage or blog
* by updating a preprint in arXiv or RePEc with the accepted manuscript
* via their research institute or institutional repository for internal institutional uses or as part of an invitation-only research collaboration work-group
* directly by providing copies to their students or to research collaborators for their personal use
* for private scholarly sharing as part of an invitation-only work group on commercial sites with which Elsevier has an agreement
**After the embargo period**
* via non-commercial hosting platforms such as their institutional repository
* via commercial sites with which Elsevier has an agreement
**In all cases accepted manuscripts should:**
* link to the formal publication via its DOI
* bear a CC-BY-NC-ND license
* if aggregated with other manuscripts, for example in a repository or other site, be shared in alignment with Elsevier's hosting policy
* not be added to or enhanced in any way to appear more like, or to substitute for, the published journal article
_Published Journal Article._ Policies for sharing published journal articles differ for subscription and gold open access articles:
**Subscription articles**
* Authors should share a link to their article rather than the full text. Millions of researchers have access to the formal publications on ScienceDirect, and so links will help users to find, access, cite, and use the best available version.
* Authors may also share their Published Journal Article privately with known students or colleagues for their personal use.
* Theses and dissertations which contain embedded PJAs as part of the formal submission can be posted publicly by the awarding institution with DOI links back to the formal publications on ScienceDirect.
* Users affiliated with a library that subscribes to ScienceDirect have additional private sharing rights for others' research accessed under that agreement. This includes use for classroom teaching and internal training at the institution (including use in course packs and courseware programs), and inclusion of the article for grant funding purposes.
* Otherwise sharing is by agreement only.
**Gold open access articles**
* May be shared according to the author-selected end-user license and should contain a CrossMark logo, the end user license, and a DOI link to the formal publication on ScienceDirect.
</td> </tr>
<tr>
<td>
PNAS
</td>
<td>
http://www.pnas.org/page/subscriptions/open-access
</td>
<td>
**Open Access.** Corresponding authors from institutions with current-year site licenses will receive a discounted open access fee of **$1,100**, compared to the regular fee of **$1,450**, to make their articles immediately free online. PNAS satisfies Green Open Access requirements and is compliant with funders worldwide (e.g., NIH, HHMI, the Medical Research Council, the Wellcome Trust), although some funders only allow their funds to be used for PNAS page charges, not the open access fee. Beginning with papers submitted September 2017, open access articles are published under a nonexclusive License to Publish and distributed under a Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) license. If your funder will not allow you to publish at all in PNAS under this license (e.g., Bill and Melinda Gates Foundation), please contact PNAS for a funder-compliant option.
**Authors.** Authors of accepted manuscripts who are interested in publishing their article open access should confirm their subscription status with their institutional librarian. If their institution has a site license, the author should note the reduced fee ($1,100) on the PNAS billing forms included with the author proofs.
**PNAS and NIH Public Access.** PNAS complies with the NIH Public Access Policy and extends access even further. PNAS automatically deposits the final, published version of all its content, regardless of funding, in PubMed Central (PMC) and makes it free at both PMC and PNAS within 6 months of publication. Authors are not required to deposit their manuscripts in PMC because PNAS will automatically deposit the final published version for public release. Authors who wish to deposit a manuscript in PMC must give PMC a release date of 6 months after print publication in PNAS. When citing a paper in NIH applications, proposals, and progress reports that arose from an NIH award, authors must include the PubMed Central reference number (PMCID). If you publish in a journal such as PNAS that makes the final published version of your paper available in PMC, a PMCID may not be assigned until several weeks after publication. During this time, compliance with the policy can be indicated by the note listed in the lower right corner of the AbstractPlus view of PubMed. If the paper is not yet publicly available on PMC, the abstract view will also list the date the article will become available.
</td> </tr>
<tr>
<td>
OSA
</td>
<td>
https://www.osapublishing.org/submit/review/copyright_permissions.cfm
</td>
<td>
**Open Access Licenses.**
**Open Access Publishing Agreement.** OSA's "Copyright Transfer and Open Access Publishing Agreement" (OAPA) is the default option for most authors when publishing in one of its fully open access journals or when opting for open access in its hybrid journals. All articles published under the OAPA are freely accessible, while copyright is transferred to OSA. Authors may post the published version of their article to their personal website, institutional repository, or a repository required by their funding agency. Authors and readers may use, reuse, and build upon the article, or use it for text or data mining, as long as the purpose is non-commercial and appropriate attribution is maintained.
**Creative Commons Licensing.** OSA is aware that some authors, as a condition of their funding, must publish their work under a Creative Commons license. It therefore offers a CC BY license for authors who indicate that their work is funded by agencies that OSA has confirmed have this requirement. Authors must enter their funder(s) during the manuscript submission process. At that point, if appropriate, the CC BY license option will be available to select for an additional fee. Any subsequent reuse or distribution of content licensed under CC BY must maintain attribution to the author(s) and the published article's title, journal citation, and DOI. Questions regarding Creative Commons, Copyright Transfer, or Open Access Licensing can be directed to [email protected].
**Author and End-User Reuse Policy.** Transfer of copyright does not prevent an author from subsequently reproducing his or her article. OSA's Copyright Transfer Agreement, OAPA, and the CC BY license give authors and others the right to reuse the author's Accepted Manuscript (AM) or final publisher Version of Record (VoR) of the article or chapter as follows:
**Attribution.**
**Non-open-access articles.** If an author chooses to post a non-open-access article published under the OSA Copyright Transfer Agreement on his or her own website, in a closed institutional repository or on the arXiv site, the following message must be displayed at some prominent place near the article and must include a working hyperlink to the online abstract in the OSA Journal: [© XXXX [year] Optical Society of America]. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modifications of the content of this paper are prohibited.
**Open access articles.** If an author or third party chooses to post an open access article published under OSA's OAPA on his or her own website, in a repository, on the arXiv site, or anywhere else, the following message should be displayed at some prominent place near the article and include a working hyperlink to the online abstract in the OSA Journal: [© XXXX [year] Optical Society of America]. Users may use, reuse, and build upon the article, or use the article for text or data mining, so long as such uses are for non-commercial purposes and appropriate attribution is maintained. All other rights are reserved. When adapting or otherwise creating a derivative version of an article published under OSA's OAPA, users must maintain attribution to the author(s) and the published article's title, journal citation, and DOI. Users should also indicate if changes were made and avoid any implication that the author or OSA endorses the use.
**CC BY licensed articles.** Any subsequent reuse or distribution of content licensed under CC BY must maintain attribution to the author(s) and the published article's title, journal citation, and DOI. Users should also indicate if changes were made and avoid any implication that the author or OSA endorses the use.
</td> </tr> </table>
From this list, we can see that the majority of the journals targeted by
Q-SORT allow an open access modality and/or allow the author to deposit a
post-print version in a repository such as arXiv 2 . This is in line with the
Horizon 2020 requirements.
All publications will acknowledge the project’s EU funding. This
acknowledgment must also be included in the metadata of the generated
information, since it helps maximise the discoverability of publications and
ensures the acknowledgment of EU funding. The terms to be included in the
metadata are listed below (a sketch of how they can be attached to a
repository record follows this list):
* "European Union (EU)" and "Horizon 2020"
* the name of the action, the acronym of the project, and the grant number
* the publication date, length of embargo period if applicable, and a persistent identifier (e.g. DOI, Handle)
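As an illustration, the following minimal sketch shows one way these terms
could be attached to a Zenodo deposition via its REST API (documented at
https://developers.zenodo.org); the token, names, grant identifier and title
are placeholders, not actual Q-SORT values.

```python
# Minimal sketch: creating a Zenodo deposition whose metadata carries the
# EU-funding acknowledgment terms listed above. All values are placeholders.
import requests

metadata = {
    "metadata": {
        "title": "Example Q-SORT dataset",           # hypothetical title
        "upload_type": "dataset",
        "description": "Underlying data for a Q-SORT publication.",
        "creators": [{"name": "Doe, Jane", "affiliation": "CNR"}],
        "keywords": ["European Union (EU)", "Horizon 2020"],
        # Linking the EC grant lets OpenAIRE index the EU funding; the
        # grant number below is a placeholder, not the Q-SORT grant.
        "grants": [{"id": "10.13039/501100000780::000000"}],
    }
}

response = requests.post(
    "https://zenodo.org/api/deposit/depositions",
    params={"access_token": "REPLACE_ME"},  # hypothetical API token
    json=metadata,
)
print(response.status_code)  # 201 on successful creation
```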
Finally, in the Model Grant Agreement, “scientific publications” mean
primarily journal articles. Whenever possible, Q-SORT will provide access to
other types of scientific publications such as presentations, public
deliverables, etc.
#### 4 ALLOCATION OF RESOURCES
Answers to the following questions help to achieve FAIR data.
**4.1 WHAT ARE THE COSTS FOR MAKING DATA FAIR IN YOUR PROJECT? HOW WILL THESE
BE COVERED? NOTE THAT COSTS RELATED TO OPEN ACCESS TO RESEARCH DATA ARE
ELIGIBLE AS PART OF THE HORIZON 2020 GRANT (IF COMPLIANT WITH THE GRANT
AGREEMENT CONDITIONS)**
The FAIR framework has a minimal impact on Q-SORT. The development and
management of the Q-SORT open data/FAIR activities is incorporated into and
budgeted for in the current Work Plan.
Overall data management will be undertaken by CNR.
The resources for long-term preservation are secured via Zenodo 3 and in
detail:
* _**Versions** _ _: Data files are versioned. Records are not versioned. The uploaded data is archived as a Submission Information Package. Derivatives of data files are generated, but original content is never modified. Records can be retracted from public view; however, the data files and record are preserved._
* _**Replicas** _ _: All data files are stored in CERN Data Centres, primarily Geneva, with replicas in Budapest. Data files are kept in multiple replicas in a distributed file system, which is backed up to tape on a nightly basis._
* _**Retention period** _ _: Items will be retained for the lifetime of the repository. This is currently the lifetime of the host laboratory CERN, which currently has an experimental programme defined for the next 20 years at least._
* _**Functional preservation** _ _: Zenodo makes no promises of usability and understandability of deposited objects over time._
* _**File preservation** _ _: Data files and metadata are backed up nightly and replicated into multiple copies in the online system._
* _**Fixity and authenticity** _ _: All data files are stored along with an MD5 checksum of the file content. Files are regularly checked against their checksums to assure that file content remains constant_ (a minimal sketch of such a check is given after this list).
* _**Succession plans** _ _: In case of closure of the repository, best efforts will be made to integrate all content into suitable alternative institutional and/or subject based repositories.”_
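Below is that sketch; the file name and recorded digest are placeholders.

```python
# Minimal sketch of a fixity check: compute a file's MD5 checksum and
# compare it to the digest recorded when the file was deposited.
import hashlib

def md5_checksum(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the MD5 hex digest of a file, read in 1 MiB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

recorded = "0123456789abcdef0123456789abcdef"  # placeholder stored digest
if md5_checksum("experiment.dm3") != recorded:  # hypothetical file name
    print("Fixity check failed: file content has changed")
```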
**4.2 WHO WILL BE RESPONSIBLE FOR DATA MANAGEMENT IN YOUR PROJECT?**
CNR will be responsible as the Project Coordinator.
### 5 OPEN DATA SECURITY
Answers to the following questions help to achieve FAIR data.
**5.1 WHAT PROVISIONS ARE IN PLACE FOR DATA SECURITY (INCLUDING DATA RECOVERY
AS WELL AS SECURE STORAGE AND TRANSFER OF SENSITIVE DATA)?**
Q-SORT stores the processed/parsed results in Zenodo. Zenodo servers are
managed via OpenStack and the Puppet configuration management system 14 ,
which ensures that the servers always have the latest security patches
applied.
**5.2 IS THE DATA SAFELY STORED IN CERTIFIED REPOSITORIES FOR LONG-TERM
PRESERVATION AND CURATION?**
Zenodo takes security very seriously. Its website reads:
_“CERN Data Centre: Zenodo’s data centre is located on CERN premises and all
physical access is restricted to a limited number of staff with appropriate
training and who have been granted access in line with their professional
duties (e.g. Zenodo staff do not have physical access to the CERN Data
Centre)._
_Servers: Zenodo’s servers are managed according to the CERN Security Baseline
for Servers, meaning e.g. that remote access to our servers is restricted to
Zenodo staff with appropriate training, and the operating system and installed
applications are kept updated with the latest security patches via our
automatic configuration management system Puppet._
_Network: CERN Security Team runs both host and network based intrusion
detection systems and monitors the traffic flow, pattern and contents into and
out of CERN networks in order to detect attacks. All access to zenodo.org
happens over HTTPS, except for static documentation pages which are hosted on
GitHub Pages._
_Data: Zenodo stores user passwords using strong cryptographic password
hashing algorithms (currently PBKDF2+SHA512). Users’ access tokens to GitHub
and ORCID are stored encrypted and can only be decrypted with the
application’s secret key._
_Application: Zenodo employs a suite of techniques to protect sessions from
being stolen by an attacker when logged in, and runs vulnerability scans
against the application._
_Staff: CERN staff with access to user data operate under_ _CERN Operational
Circular no. 5_ _, meaning among other things that_
_○ staff should not exchange among themselves information acquired unless it
is expressly required for the execution of their duties._
_○ access to user data must always be consistent with the professional duties
and only permitted for resolution of problems, detection of security issues,
monitoring of resources and similar._
_○ staff are liable for damage resulting from any infringement and can have
access withdrawn and/or be subject to disciplinary or legal proceedings
depending on the seriousness of the infringement.”_
14 http://about.zenodo.org/infrastructure/
#### 6 ETHICAL ASPECTS
Answers to the following questions help to achieve FAIR data.
**6.1 ARE THERE ANY ETHICAL OR LEGAL ISSUES THAT CAN HAVE AN IMPACT ON DATA
SHARING? THESE CAN ALSO BE DISCUSSED IN THE CONTEXT OF THE ETHICS REVIEW.**
The Q-SORT Consortium has not identified any specific ethics issues related to
the Work Plan, outcomes or dissemination.
#### 7 CONCLUSION
This first release of the Open Data Management Plan specifies the principles
and actions that Q-SORT will embrace in order to make most of its data FAIR.
Q-SORT will revisit the Open Data Management Plan in the following two
releases, which will be updated, if necessary, according to any needs that
might arise as a consequence of the actions and activities undertaken to
achieve the objectives of the project.
# 1 Executive Summary
Horizon 2020 aims to make all data generated through its funded projects as
widely accessible as possible while also protecting personal and sensitive
data. This is achieved through participation of the Horizon 2020 funded
projects in the Open Research Data Pilot (ORDP). RECO2ST participates by
default in the ORDP, from which a series of obligations result:
1. All data should be Findable, Accessible, Interoperable, Re-usable (FAIR)
2. All data needed to validate the results presented in publications should be granted open access
3. Legal requirements according to Article 29.3 of the Grant Agreement
4. A Data Management Plan has to be prepared.
The present document is the first version of the Data Management Plan for the
RECO2ST project. The Data Management Plan is a living document that will
evolve and be updated over the course of the project. The current version
presents a summary of the data that will be collected and generated by the
project, including the purpose of data collection/generation as well as the
origin, type and format of the data. A wide range of data is expected to be
collected/generated, originating from various sources and in various formats.
The use of historical data and data utility are also identified.
Moreover, the actions to be taken for making data FAIR are outlined. RECO2ST
will take all necessary actions and steps in order to make the data findable,
openly accessible, interoperable and reusable as soon as possible. This will
be achieved by selecting credible repositories that allow open access to data
and providing appropriate licences that allow data reuse. When restrictions
are deemed necessary in finding, accessing or re-using certain datasets, these
are stated along with justification. Finally, a first planning of data
management and data security is presented.
# 2 Introduction
Horizon 2020 aims to make all knowledge generated through its funded projects
as widely accessible as possible by granting Open Access to scientific
publications and research data. The intention is to support the development of
research and innovation in the EU by building on and improving earlier work
that can be easily findable, accessible and re-usable. At the same time
industries and the public should benefit from this knowledge at no cost and
have the ability to access it online. Specifically for research data,
participation to the Open Research Data Pilot is required by default for all
Horizon 2020 funded projects, as of the Work programme 2017 and covers all
thematic areas [1].
The RECO2ST project participates in the Open Research Data Pilot Programme.
Therefore, the following obligations apply with regards to data management:
1. **All research data should be FAIR** : Findable, Accessible, Interoperable and Re-usable[2]
_Examples of data include statistics, results of experiments, measurements,
observations resulting from fieldwork, survey results, interview recordings
and images_ ([2], pg. 6).
2. **Open access** should be provided to the **data needed to validate the results presented in publications** (including other data that are specified in the DMP)[2]
3. **Legal requirements**
According to the Article 29.3 “Open access to research data” of the Grant
Agreement:
_Regarding the **digital research data** generated in the action (‘ **data** ’), the beneficiaries must:_
1. _deposit in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge for any user — the following:_
1. _**the data, including associated metadata** , needed to validate the results presented in scientific publications as soon as possible; _
2. _other data, including associated metadata, as specified and within the deadlines laid down in the 'data management plan';_
2. _**provide information** — via the repository — **about tools and instruments** at the disposal of the beneficiaries and necessary for validating the results (and — where possible — provide the tools and instruments themselves). _
_…the beneficiaries do not have to ensure open access to specific parts of
their research data if the achievement of the action's main objective, as
described in Annex 1, would be jeopardised by making those specific parts of
the research data openly accessible. In this case, the data management plan
must contain the reasons for not giving access._
4. **A** **Data Management Plan** has to be prepared that defines: what datasets will be generated/processed, whether/how these datasets will be accessible, how data are going to be stored/preserved, sensitive data protection, making data FAIR, which datasets will be closed and reasons [2].
The present document is the Data Management Plan for the RECO2ST project and
addresses the obligations that derive from the participation in the ORDP. The
DMP structure and contents follow the template that is proposed in [3].
# 3 Data Summary
The RECO2ST project aims at developing near zero energy, cost effective and
healthy retrofit solutions for the aging building stock of Europe. This aim
will be achieved through a systemic three step approach:
* Deployment of a Refurbishment Assessment Tool (RAT) that will create refurbishment scenarios
* Adoption of Integrated Project Delivery (IPD) for the formulation of renovation Action Plans
* Development and installation of a refurbishment package (Retrofit-Kit) that will include innovative and customizable technologies for personalised renovation.
Four demonstration sites (apartment buildings in Spain, Switzerland, UK and
Denmark) have been chosen for implementing and validating the RAT tool and the
Retrofit-Kit personalised renovation solutions.
An additional early adopter site (Fribourg, CH) will also be used in order to
collect early data, from the early versions until the final version of the
RECO2ST RAT and related tools.
For the development of the tools and the retrofit solutions each work package
will collect and/or generate a series of data as follows:
**Data Collection and Generation within WP2**
* Project KPIs
In Task 2.1 the overarching aim of data collection and generation will be the
matching of users’ expectations with outputs of proposed retrofit solutions
(and their controls) in terms of energy reduction, cost, safety and indoor
environmental conditions.
Task 2.1 will collect the end user requirements (thermal comfort, indoor air
quality (IAQ), daylighting, outdoor connectivity and controllability of
environmental conditions) through prevailing building benchmarks, while end-
users’ perceptions will be collected by direct consultation and from
historical energy consumption files.
energy consumption files. These will be matched with environmental conditions,
energy savings and costs in order to derive KPIs that satisfy the user
expectations and produce a set of parameters for input into the sensor and
energy management system that will be developed in WP5. Legislative
requirements for each demo-site (in Denmark, Switzerland, Spain and the UK);
state-of-the-art regulatory guidelines and industrial best practices to
achieve nZEBs; occupants’ comfort expectations (thermal, visual, acoustical,
indoor air quality and safety) and the functional requirements and controls
for the adoption of the solutions have been reported by recently funded
European projects and other sources cited in the Deliverable 2.1 (End user
requirements).
* Specifications
Technical specifications of the proposed nZEB renovation components will be
provided by technology providers participating in the project and will be
collected in Task 2.2. These will be aligned with the KPIs of energy and
environmental conditions.
* Performance prior to retrofitting
Data will be collected in order to capture the buildings’ performance prior to
and after retrofitting.
An initial energy audit will take place as part of Task 2.3 in order to form
the baseline performance of the buildings prior to retrofitting. The energy
audit will collect information such as structural and geometrical features,
thermal properties of the envelope, electric systems, internal loads,
occupancy schedules, etc.
Up-to-date energy use will be determined based on historical energy invoices.
Energy demand and consumption of the buildings will be derived from
simulations. Existing energy/power invoices can further enhance the energy
consumption definition.
The data collected and generated through WP2 will be used as input in
following work packages.
* **Data Collection and Generation within WP3**
WP3 will develop the RAT tools.
Task 3.1 will create a new database structure derived from EPIQR+ database
(buildings dimensional coefficients, costs and materials) and data collected
from EU building information databases regarding building materials,
regulations and legislation relevant to the time of building construction,
location and type of the building to be refurbished, as well as the current
legal framework. The management of LCA-related data, relevant to WP3, will be
addressed under WP8 with which the LCA activities are linked
The RAT tools (web-based interface) will be developed in task 3.1 and will
communicate with modules (like ROWBUST by TNI, ECOSOLUTIONS by GWAT, LCA by
Quantis, Energy auditing and modification and integration by UCA) to extract
data and results needed to evaluate refurbishment scenarios (Task 3.2).
This platform will also collect user/client information (contact information,
address, building information extract from modules) in order to manage user
connection and navigation between buildings refurbishment scenarios.
* **Data Collection and Generation within WP4**
In WP4 the technologies and materials that will be considered for the Retrofit
Kit will be developed.
In Task 4.1 and subtask 4.1.2 the properties of the VIPs will be measured,
analysed and datasets generated.
Task 4.2 will collect the power output data and carbon emission savings data
of PV installations with the use of energy meters and other suitable sensors.
Task 4.4 is responsible for HVAC selection and shall collect equipment data
related to performance, cost, environmental data and end of life.
Through Task 4.5 the impact of cooling solutions on the cooling consumption
reduction will be quantified through advanced thermal simulations.
Finally, Task 4.6 will investigate the implementation of nature based
technologies. In Task 4.6.1 air disinfection data will be collected after an
assessment with silver ions, H2O2 and UV-radiation. Furthermore data of the
microbial spore load of the air will be collected through tests with regular
air suction tests and grow medium.
* **Data Collection and Generation within WP5**
WP5 will develop a Wireless Sensor Network (WSN) with the purpose of
collecting various data that will feed the assessments to be carried out in
other work packages (WP2, WP4, WP6 and WP7).
In task 5.1 various data will be collected with the WSN. The data to be
collected include: temperature, light, humidity, airflow, occupancy,
electricity, gas, CO2, etc. Especially for the VIP panels, RFID sensors will
collect the internal gas pressure of the panels.
Furthermore, in Task 5.2, in order to maintain the KPIs, the WSN will be
integrated with control algorithms which will generate appropriate actuating
signals (water/air flow rate, valve position, etc.) and suitable set-point
trajectories for HVAC components.
In task 5.3 user feedback such as comfort levels, requests for improved air
quality, changes in ambient temperature, etc. will be collected through the
WSN (an illustrative sketch of a WSN sample and a derived control signal is
given below).
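Purely as an illustration of the kinds of records and signals described for
WP5, the following sketch shows one possible representation of a WSN sample
and a simple proportional set-point rule; all names, fields and the control
rule are assumptions, not the project's actual design.

```python
# Illustrative sketch only: a WSN sample and a proportional actuating
# signal toward a temperature set-point. Field names are assumptions.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class WsnSample:
    sensor_id: str
    timestamp: datetime
    temperature_c: float   # indoor air temperature
    humidity_pct: float    # relative humidity
    co2_ppm: float         # CO2 concentration

def heating_valve_position(sample: WsnSample, setpoint_c: float) -> float:
    """Proportional valve opening in [0, 1]: fully open 2 degC below set-point."""
    error = setpoint_c - sample.temperature_c
    return max(0.0, min(1.0, error / 2.0))

sample = WsnSample("flat3-livingroom", datetime.now(), 19.2, 41.0, 650.0)
print(heating_valve_position(sample, setpoint_c=21.0))  # -> 0.9
```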
* **Data Collection and Generation within WP6**
WP6 is intended to validate the RECO2ST tools.
In Task 6.1 a test facility will be prepared to enable the validation of a
number of technologies developed in WP4. These include sensing, HVAC, an
Intelligent Energy Management System (IEMS), and smart windows. Various tests
will be conducted and monitored using temperature, relative humidity, CO2,
Particulate Matter (PM), and volatile organic compounds (VOC) sensors. Data
from these sensors will be collected and stored for evaluation of technology
performances.
Tasks 6.2 and 6.3 will generate data related to energy consumption, cost of
retrofit scenarios, and indoor environmental quality (temperature, relative
humidity, CO2, VOCs and PMs).
Task 6.4 shall replicate the tests performed in tasks 6.2 and 6.3 in a non-
residential setting and produce the same types of data.
* **Data Collection and Generation within WP7**
In WP7 data will be collected from the demonstration sites.
Task 7.1 is connected to Task 2.3 and WP5 for the establishment of a
monitoring plan comprising the initial energy audit (Task 2.3) and monitoring
after retrofitting (through the WSN developed in WP5).
In tasks 7.2 through 7.5 the sites will be prepared. Through these tasks
architectural and engineering plans will be developed. Furthermore, these
tasks will be the installation site and data source for the WSN that will be
developed in WP5.
Task 7.6 along with the evaluation of the results will perform a cost benefit
analysis. For the analysis, energy performance of various scenarios
considering the implementation of alternative competing technologies will be
calculated using energy calculation software. For that purpose, data of the
alternative technology options will be collected from publicly available data.
**Data Collection and Generation within WP8**
WP8 will investigate the commercialisation of RECO2ST.
In Task 8.3, LCA and LCC analyses will be performed. For that purpose,
available data in existing Life Cycle Inventory (LCI) databases will be used
to populate the LCA and LCC modules of the developed RECO2ST RAT tools or the
existing tools interacting with the RECO2ST RAT (e.g. EPIQR, ECOSOLUTIONS,
ROWBUST). Additionally, data gaps will be identified and filled by collecting
LCA data available in the literature, as well as expert input from the
technical partners in the project. LCC data used to ensure consistency with
the LCA data may be generated as well.
In addition to LCI data available from public and/or commercial databases,
custom processes (LCI data) will be developed to meet the needs of the RECO2ST
project. The management of such data will be clarified with the database
owners. The availability of data generated by Quantis shall be discussed with
the partners involved with the platform development, in agreement with the
exploitation strategy.
Task 8.4 will collect GIS aerial photographs of every demonstration site in
order to determine indices of (a) aspect (height to width), (b) plan density
(built footprint), (c) fabric density (vertical surface), (d) green density,
(e) thermal mass (specific heat capacity), and (f) surface albedo of the
district.
These indices will be used to calculate the air flow and temperature fields
using CFD.
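The parenthetical definitions above suggest simple geometric ratios for most of these indices. The following hedged Python sketch computes them from assumed GIS-derived quantities; all variable names and values are hypothetical, and thermal mass and surface albedo are material properties rather than geometric ratios, so they are omitted.

```python
def site_indices(mean_building_height_m: float,
                 mean_street_width_m: float,
                 built_footprint_m2: float,
                 vertical_surface_m2: float,
                 green_area_m2: float,
                 district_area_m2: float) -> dict:
    # Ratios follow the parenthetical definitions in the text above.
    return {
        "aspect": mean_building_height_m / mean_street_width_m,    # height to width
        "plan_density": built_footprint_m2 / district_area_m2,     # built footprint
        "fabric_density": vertical_surface_m2 / district_area_m2,  # vertical surface
        "green_density": green_area_m2 / district_area_m2,
        # Thermal mass (specific heat capacity) and surface albedo come from
        # material data, not geometry, so they are not computed here.
    }

print(site_indices(15.0, 10.0, 4200.0, 9800.0, 1500.0, 12000.0))
```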
Furthermore, Task 8.4 will produce weather files of future climate change
scenarios, applicable to the demonstration sites. For that purpose,
location-specific weather files will be selected (or generated through
METEONORM if not available).
In Subtask 8.4.2, interior temperature, humidity, CO2 levels, VOC and
particulate matter data will be collected regarding the capacity of a climate
wall technology to provide air-conditioning and sanitation. The WSN technology
will support monitoring and collection of data. Moreover, literature values of
cost and efficiency data for competing technical installations, such as split
air-conditioning units and humidifiers, will be collected and compared to the
values of the RECO2ST nature-based technologies. The CO2 reduction from the
application of nature-based technologies will also be analysed.
Task 8.5 includes the development of a Business Model Kit for residential
building renovation. The Business Model Kit will address the whole value chain
of the renovation process and will detail the value proposition (energy
savings, thermal and visual comfort and increased air quality, real estate
value upgrade based on higher building ratings, etc.), key partners and
stakeholders, activities and resources, the economies of scope and scale,
funding mechanisms and financing schemes. In order to complete this task,
several types of data may be collected in close contact with T8.6 (Assessment
of early adopter sites). This may include energy, building, environment, and
occupants'/tenants' behaviour data (the latter in the form of consolidated
anonymised data). In addition, financial data as well as business data from
stakeholders involved in a building refurbishment may be collected (presumably
through surveys, interviews or specific data requests).
Finally, in Task 8.6, data from early through final versions of the RAT and
related tools will be collected on so-called "early adopter sites" in the city
of Fribourg (Switzerland). Specifically, energy consumption, energy efficiency
measures (EcoSolutions audit), investment needs (EPIQR analysis) as well as
LCA data from the implementation of the RECO2ST tools on an early adopter site
will be collected (this may include energy, building, environment, and
occupants'/tenants' behaviour data; the latter in the form of consolidated
anonymised data).
## 3.1 Origin, type and format of data
In the following table the origin, type and format of data, that will be
collected from the various tasks as described above, are presented (Table 1).
_Table 1: Origin, type and format of data that will be collected/generated
from the project_
<table>
<tr>
<th>
**Data**
</th>
<th>
**Origin**
</th>
<th>
**Type**
</th>
<th>
**Format**
</th> </tr>
<tr>
<td>
WP2
</td> </tr>
<tr>
<td>
**User expectations about thermal comfort, IAQ,**
**daylighting**
</td>
<td>
Literature review (Publicly available information: legislations, standards,
technical reports, guidelines)
</td>
<td>
Quantitative
</td>
<td>
Reports (.doc, .pdf), Comma-
Separated Values
(.csv), Diagrams
</td> </tr>
<tr>
<td>
**User expectations about connectivity to outdoors and controllability of**
**environmental conditions**
</td>
<td>
Literature review (Publicly available information: legislations, standards,
technical reports, guidelines accomplished and ongoing projects,
scientific papers)
</td>
<td>
Qualitative
</td>
<td>
Reports (.doc, .pdf), Comma-
Separated Values
(.csv), Diagrams
</td> </tr>
<tr>
<td>
**KPIs of user expectations**
</td>
<td>
Literature review (Publicly available information: legislations, standards,
technical reports, guidelines)
</td>
<td>
Quantitative
</td>
<td>
Reports (.doc, .pdf), Comma-
Separated Values
(.csv), Diagrams
</td> </tr> </table>
<table>
<tr>
<th>
**Data**
</th>
<th>
**Origin**
</th>
<th>
**Type**
</th>
<th>
**Format**
</th> </tr>
<tr>
<td>
**Specifications of nZEB components**
</td>
<td>
Literature review
(Publicly available information: legislations, standards, technical reports)
</td>
<td>
Quantitative
</td>
<td>
Reports (.doc,
.pdf)
</td> </tr>
<tr>
<td>
**General building information (age, address, size)**
</td>
<td>
Audit
</td>
<td>
Quantitative & Qualitative
</td>
<td>
Images (.jpg),
Comma-
Separated Values
(.csv)
</td> </tr>
<tr>
<td>
**Structural and geometrical features**
</td>
<td>
Audit
</td>
<td>
Qualitative & Quantitative
</td>
<td>
Comma-
Separated Values
(.csv)
</td> </tr>
<tr>
<td>
**Thermal properties of the envelope**
</td>
<td>
Audit
</td>
<td>
Quantitative
</td>
<td>
Comma-
Separated Values
(.csv)
</td> </tr>
<tr>
<td>
**Architectural plans/sections**
</td>
<td>
Audit
</td>
<td>
Qualitative
</td>
<td>
Industry
Foundation
Classes (.ifc),
DWG (.dwg)
</td> </tr>
<tr>
<td>
**Electric systems**
</td>
<td>
Audit
</td>
<td>
Qualitative & Quantitative
</td>
<td>
Comma-
Separated Values
(.csv)
</td> </tr>
<tr>
<td>
**Appliances**
</td>
<td>
Audit
</td>
<td>
Qualitative & Quantitative
</td>
<td>
Comma-
Separated Values
(.csv)
</td> </tr>
<tr>
<td>
**Occupancy schedules**
</td>
<td>
Audit, Assumptions, Measurements
</td>
<td>
Quantitative
</td>
<td>
To Be Decided
(Figures, Tables)
</td> </tr>
<tr>
<td>
**Energy demand and consumption prior to**
**retrofitting**
</td>
<td>
Simulations, Energy invoices
</td>
<td>
Quantitative
</td>
<td>
Comma-
Separated Values
(.csv), Diagrams
</td> </tr>
<tr>
<td>
WP3
</td> </tr> </table>
<table>
<tr>
<th>
**Data**
</th>
<th>
**Origin**
</th>
<th>
**Type**
</th>
<th>
**Format**
</th> </tr>
<tr>
<td>
**User information (Contact, address, building information)**
</td>
<td>
GUI
</td>
<td>
Qualitative
</td>
<td>
To Be Decided
</td> </tr>
<tr>
<td>
**Building information (address, energy consumption, material)**
</td>
<td>
GUI and modules
</td>
<td>
Qualitative/ Quantitative
</td>
<td>
To Be Decided
</td> </tr>
<tr>
<td>
**Building information (buildings dimensional**
**coefficients, costs and material)**
</td>
<td>
EPIQR+ database
</td>
<td>
Qualitative / Quantitative
</td>
<td>
To Be Decided
(Databases)
</td> </tr>
<tr>
<td>
**Building information (material, typologies, regulations,**
**building types)**
</td>
<td>
EU databases
</td>
<td>
Qualitative / Quantitative
</td>
<td>
To Be Decided
(Databases)
</td> </tr>
<tr>
<td>
WP4
</td> </tr>
<tr>
<td>
**VIP properties**
</td>
<td>
VIP measurement and testing (guarded hot plate system, Fourier transform
infrared equipment, Mercury porosimetry, accelerated ageing
tests)
</td>
<td>
Quantitative
</td>
<td>
Comma-
Separated Values
(.csv) and Images (.jpg) or any other suitable file type
</td> </tr>
<tr>
<td>
**Power output of PV arrays**
</td>
<td>
Energy meters, temperature and heat flux sensors in conjunction with a data
logger system
</td>
<td>
Quantitative
</td>
<td>
Comma-
Separated Values
(.csv) and Images (.jpg) or any other suitable file type
</td> </tr>
<tr>
<td>
**Carbon emission savings data of PV arrays**
</td>
<td>
Energy meters
</td>
<td>
Quantitative
</td>
<td>
Comma-
Separated Values
(.csv) and Images (.jpg) or any other suitable file type
</td> </tr>
<tr>
<td>
**Cooling consumption reduction**
</td>
<td>
Advanced thermal
simulations
</td>
<td>
Quantitative
</td>
<td>
Comma-
Separated Values (.csv) and Images (.jpg) or any other suitable file type
</td> </tr> </table>
<table>
<tr>
<th>
**Data**
</th>
<th>
**Origin**
</th>
<th>
**Type**
</th>
<th>
**Format**
</th> </tr>
<tr>
<td>
**Air disinfection data**
</td>
<td>
Indoor sensors at specified locations on inlet/outlet pipes and within the
room, and data collected
through data loggers
</td>
<td>
Quantitative
</td>
<td>
Comma-
Separated Values
(.csv) and Images (.jpg) or any other suitable file type
</td> </tr>
<tr>
<td>
**Microbial spore load**
</td>
<td>
Indoor temperature, airflow and heat-flux sensors at specified locations and
data collected through
data loggers
</td>
<td>
Quantitative
</td>
<td>
Comma-
Separated Values
(.csv) and Images (.jpg) or any other suitable file type
</td> </tr>
<tr>
<td>
**Advanced windows**
</td>
<td>
Indoor and outdoor temperatures,
Window gap and sun wall temperatures,
Pressure and moisture level, airflow and heat-flux sensors at specified
locations and data collected through
data loggers
</td>
<td>
Quantitative
</td>
<td>
Comma-
Separated Values (.csv) and Images (.jpg) or any other suitable file type
</td> </tr>
<tr>
<td>
**HVAC equipment data (cost, environmental data, end of life)**
</td>
<td>
Equipment provider
(equipment data-
sheet)
</td>
<td>
Quantitative
</td>
<td>
.xls
</td> </tr>
<tr>
<td>
WP5
</td> </tr>
<tr>
<td>
**Temperature, light, humidity, airflow, occupancy, electricity, gas, CO2**
</td>
<td>
WSN
</td>
<td>
Quantitative
</td>
<td>
To Be Decided
</td> </tr>
<tr>
<td>
**Comfort levels, request for improved air quality, change in ambient
temperature**
</td>
<td>
WSN
</td>
<td>
Quantitative
</td>
<td>
To Be Decided
</td> </tr>
<tr>
<td>
**Internal gas pressure of VIP**
</td>
<td>
RFID sensors
</td>
<td>
Quantitative
</td>
<td>
Comma-
Separated Values
(.csv) and Images (.jpg) or any other suitable file type
</td> </tr> </table>
<table>
<tr>
<th>
**Data**
</th>
<th>
**Origin**
</th>
<th>
**Type**
</th>
<th>
**Format**
</th> </tr>
<tr>
<td>
WP6
</td> </tr>
<tr>
<td>
**Temperature, relative**
**humidity, CO2, PM**
</td>
<td>
Sensors in monitored
test cells
</td>
<td>
Quantitative
</td>
<td>
To Be Decided
</td> </tr>
<tr>
<td>
**VOC**
</td>
<td>
Sensors in monitored test cells, microbial tests
</td>
<td>
Quantitative
</td>
<td>
To Be Decided
</td> </tr>
<tr>
<td>
**HVAC efficiency**
</td>
<td>
Monitored test cells
</td>
<td>
Quantitative
</td>
<td>
To Be Decided
</td> </tr>
<tr>
<td>
**Energy consumption**
</td>
<td>
Demo sites
</td>
<td>
Quantitative
</td>
<td>
To Be Decided
</td> </tr>
<tr>
<td>
**Cost of retrofit scenarios**
</td>
<td>
Demo sites
</td>
<td>
Quantitative
</td>
<td>
To Be Decided
</td> </tr>
<tr>
<td>
**IAQ**
</td>
<td>
Demo sites
</td>
<td>
Quantitative
</td>
<td>
To Be Decided
</td> </tr>
<tr>
<td>
WP7
</td> </tr>
<tr>
<td>
**Architectural plans**
</td>
<td>
Responsible Architect of each demo site
</td>
<td>
N/A
</td>
<td>
.dwg or .pdf
</td> </tr>
<tr>
<td>
**Engineering plans**
</td>
<td>
Responsible Engineer of each demo site
</td>
<td>
N/A
</td>
<td>
.dwg or .pdf
</td> </tr>
<tr>
<td>
**Energy performance of alternative scenarios**
</td>
<td>
Responsible partner for each site
</td>
<td>
Quantitative
</td>
<td>
To Be Decided
</td> </tr>
<tr>
<td>
**Alternative competitive**
**technologies’ data**
</td>
<td>
Publicly available information
</td>
<td>
To Be
Decided
</td>
<td>
To Be Decided
</td> </tr>
<tr>
<td>
WP8
</td> </tr>
<tr>
<td>
**LCA and LCC data** (including LCA database for EPIQR and RAT platform in
relation with
WP3)
</td>
<td>
Existing public and/or commercial Life Cycle Inventory (LCI) databases,
Literature, custom processes developed by Quantis, expert judgement of
technical partners
</td>
<td>
Quantitative and qualitative
</td>
<td>
Report (.doc, .pdf), LCI processes and
datasets (.xlsx)
</td> </tr> </table>
<table>
<tr>
<th>
**Data**
</th>
<th>
**Origin**
</th>
<th>
**Type**
</th>
<th>
**Format**
</th> </tr>
<tr>
<td>
**GIS photos**
</td>
<td>
Web accessible data
</td>
<td>
Qualitative
</td>
<td>
To Be Decided
</td> </tr>
<tr>
<td>
**Site indices: aspect** (height to width), **plan density** (built
footprint), **fabric density** (vertical surface), **green density**, **thermal
mass** (specific heat capacity), **surface albedo of the district.**
</td>
<td>
Analysis of GIS
photos
</td>
<td>
Qualitative and/or quantitative
</td>
<td>
To Be Decided
</td> </tr>
<tr>
<td>
**Site air flow and temperature field**
</td>
<td>
CFD
</td>
<td>
Quantitative
</td>
<td>
To Be Decided
</td> </tr>
<tr>
<td>
**Weather files of future climate**
</td>
<td>
Web accessible data
</td>
<td>
Quantitative
</td>
<td>
To Be Decided
</td> </tr>
<tr>
<td>
**Location specific weather files**
</td>
<td>
Selected (or generated through METEONORM if not available).
</td>
<td>
Quantitative
</td>
<td>
To Be Decided
</td> </tr>
<tr>
<td>
**Air-conditioning and sanitation data of climate wall**
</td>
<td>
Monitored with WSN
</td>
<td>
Quantitative
</td>
<td>
To Be Decided
</td> </tr>
<tr>
<td>
**Cost and efficiency of**
**competing technologies**
</td>
<td>
Literature
</td>
<td>
Qualitative and/or quantitative
</td>
<td>
To Be Decided
</td> </tr>
<tr>
<td>
**CO2 Reduction**
</td>
<td>
From LCA databases (or equivalents)
</td>
<td>
Quantitative
</td>
<td>
To Be Decided
</td> </tr>
<tr>
<td>
**Financial data as well as business data from stakeholders involved in a
building refurbishment**
</td>
<td>
Collected by means of interviews, surveys or specific data request.
</td>
<td>
Qualitative and/or quantitative
</td>
<td>
To Be Decided
</td> </tr>
<tr>
<td>
**Early stage data (energy consumption, building data, owners/tenant data,
environment quality (including temperature, humidity etc.))**
</td>
<td>
Collected during assessment (local visit of the
buildings), instruments, databases, occupants surveys
</td>
<td>
Qualitative and/or quantitative
</td>
<td>
To Be Decided
</td> </tr> </table>
## 3.2 Re-use of existing data
### 3.2.1 Historical Energy Data
For the purpose of Task 2.3 and the performance assessment of the buildings,
energy/power consumption of the buildings will be characterised by collecting
previous energy/power invoices, where possible. Historical energy data
available from prior years can further enhance the formation of the baseline
for performance evaluation. Historical energy consumption files will also
support the development of end users’ perceptions in Task 2.1.
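As a minimal illustration of how a consumption baseline could be formed from prior invoices, the sketch below averages hypothetical monthly kWh values per calendar month; the project may well apply a more elaborate normalisation (e.g. for degree days), and all values here are invented.

```python
from collections import defaultdict

# Hypothetical invoice records: (billing month, consumption in kWh).
invoices = [
    ("2016-01", 1240.0), ("2017-01", 1180.0),
    ("2016-07", 410.0), ("2017-07", 395.0),
]

# Group by calendar month ("MM") across the prior years...
by_month = defaultdict(list)
for month, kwh in invoices:
    by_month[month[5:]].append(kwh)

# ...and average to form a simple per-month baseline.
baseline = {mm: sum(vals) / len(vals) for mm, vals in by_month.items()}
print(baseline)  # {'01': 1210.0, '07': 402.5}
```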
### 3.2.2 Legislations – regulations – benchmarks
Prevailing building benchmarks will be consulted for the collection of end-
users' requirements in Task 2.1. Furthermore, legislative requirements will be
collected for each demo site, as well as state-of-the-art regulatory
guidelines and industrial best practices to achieve nZEBs.
### 3.2.3 Existing Building Information Databases
The EPIQR+ database and EU building information databases will be exploited in
Task 3.3 in order to support the generation of retrofit scenarios.
### 3.2.4 Publicly available data of technologies
Publicly available data of alternative competitive technology solutions will
be collected in Task 7.6 so as to be used as input in the energy simulations
for the evaluation of alternative scenarios. Furthermore cost and efficiency
data of competing technologies that are available in literature will be used
for comparison to the values of the RECO2ST nature based technologies.
### 3.2.5 Existing LCA databases and literature
LCA data will be collected from existing Life Cycle Inventory (LCI) databases
and from the literature in order to populate the LCA & LCC module of the
RECO2ST tools and support the LCA analysis in Task 8.3. The management of such
existing data will be clarified with the database owners during the project.
### 3.2.6 Weather Files
Location-specific weather files will be selected (or generated through
METEONORM if not available) in order to support the creation of future climate
weather files in Task 8.4.
## 3.3 Size
At this stage of the project, with respect to WP2, the deliverables that have
been produced are:
* D 2.1 – report, word format (.docx), size 2.5 MB
* D 2.2 – report, word format (.docx), size 1.2 MB
In WP4, the data collected from lab testing of the components and full-scale
systems installed at the various demo sites will be in excess of 10 GB.
The expected size of the data that will be collected from the remaining tasks
of the project will be estimated later on as the project progresses.
Specifically in relation to the tools, the size of the data will depend on the
order of magnitude of each assessment.
## 3.4 Data utility
First and foremost, the data collected and generated through each work package
will be used as input for other work packages of RECO2ST. Overall, the data
collected and generated in the RECO2ST project will be a useful source for
testing, developing and validating similar tools, as well as for allowing
interoperability with other tools for similar purposes.
The data that will be generated regarding energy consumption before and after
renovation as well as regarding user expectations can be used as a source of
literature and reference for comparison with similar projects in identifying
trends and the potential of certain retrofit interventions and technologies.
Furthermore, the weather files of future weather scenarios that will be
developed will be a valuable source in assessing future energy savings, as
well as the impact of urban environment on proposed retrofits.
The collection of LCA & LCC data and the development of a Life Cycle Inventory
(LCI) database of renovation works will be useful to any stakeholder
interested in integrating sustainability aspects in building renovation
projects. Especially, using the LCA/LCC module of the RAT platform and the
associated data, decision makers will have a solid basis to compare building
renovation scenarios based on their environmental and economic performances
and identify where their main hotspots are. In addition, by providing
quantitative results on products' environmental footprints, LCA results will
also help RECO2ST technology developers by informing them on the
sustainability performance of their developed solutions.
The data collected pertaining to the design and performance of the energy
efficiency and generation technologies will be useful for (i) manufacturers
and suppliers; (ii) specifiers and installers; (iii) building owners/managers
and occupants; (iv) policy makers. The aforementioned groups may be within and/or
outside the current project. Moreover, the data generated and collected
regarding the impact of the proposed nature-based technologies will aid
interior greening and nature-based solution providers to assess the quantity
and type of available solutions, and provide useful indicators to building
owners/managers and occupants.
Finally, the data collected regarding user habits, expectations and responses
to system changes can be used to optimize thermal comfort settings and
understand users' needs. This data will be valuable for comparison across
demonstration sites as well as within a given multi-dwelling building.
# 4 FAIR Data
## 4.1 Making data Findable
RECO2ST data will be made findable as an obligation of participation in the
ORDP.
The building energy consumption, user settings (e.g. thermal set points), and
indoor air and environmental data (e.g. CO2 levels, indoor air temperature)
will all go into a database for safe storage. Using a standardised taxonomy
for building energy data based on the Building Energy Data Exchange
Specification (BEDES) [4], the data will be findable using standard tools and
search criteria.
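As a hedged illustration of taxonomy-based findability, the sketch below tags a stored time series with searchable metadata fields loosely modelled on BEDES-style terms; the field names and values here are illustrative stand-ins, and the actual term names and enumerations should be taken from the BEDES specification itself.

```python
# Illustrative metadata tag for one stored time series.
record = {
    "premises_name": "demo-site-A",       # assumed site identifier
    "resource": "Electricity",            # measured resource
    "end_use": "HVAC",                    # end-use category
    "interval_frequency": "15 min",       # collection interval
    "unit_of_measure": "kWh",
    "data_uri": "db://reco2st/site-a/electricity",  # hypothetical locator
}

def matches(rec: dict, **criteria) -> bool:
    """Simple search predicate over the metadata fields."""
    return all(rec.get(k) == v for k, v in criteria.items())

print(matches(record, resource="Electricity", end_use="HVAC"))  # True
```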
## 4.2 Making data openly Accessible
As a requirement resulting from participation in the ORDP and Article 29.3 of
the Grant Agreement, all data should be made openly accessible. The data will
be made available at the end of the project. All data from WP4 will firstly be
made accessible to the current partners of the project and after securing
Intellectual Property (IP) will be made accessible to all.
The data will be made accessible by deposition in repositories. All the data
will be published on the certified free-access repository Zenodo [5]. If
deemed necessary, partners can choose to deposit their data in other
repositories, which can be found through re3data.org [5], [6].
Certain datasets will be restricted from being openly available as justified
below by the partners responsible for collecting/generating these datasets:
* **WP3**: The user/client information that will be collected in the RAT tools (GUI) cannot be made openly available, in compliance with the GDPR. The same applies to the data extracted from the modules connected to this web application. The EPIQR+ database will also not be openly available, in order to protect the refurbishment cost method that has been developed by ESTIA over the last 20 years.
* **WP4** : All data pertaining to the design and performance of new technologies will not be made available open access to any audience until IPs have been protected and any patent applications have been filed. Once the IP is protected, the data can be deposited on open access sources.
* **WP7**: Respecting the General Data Protection Regulation (GDPR) [7], data belonging to tenants cannot be shared. The data issued from sensors (temperature, humidity) or invoices (electrical consumption) can only be shared if made anonymous, with no possible link to tenants. Tenants' rents and all personal information will not be shared. Also, data belonging to contractors, architects, engineers etc. cannot be shared. The data issued from tender processes (especially financial offers) can only be shared if made anonymous, with no possible link to contractors, architects and engineers. All personal information from contractors, architects and engineers will not be shared. Further provisions and detailed implementation of personal data protection in agreement with the GDPR will be addressed by Work Package 10 and Deliverable 10.1 NEC-Requirement No. 1.
* **WP8** : The data related to demonstration sites and Retrofit-Kit components/technologies will be collected either from partners by means of data collection templates (mostly spreadsheet or word processor files) or from literature. The data from the templates (input data) will then be implemented in the LCA models in professional LCA software SimaPro (simapro.com) but will not be made available unless specified otherwise by the data owners. LCA/LCC results will be made available publicly as part of D8.3. The latter will refer to the assessment of the demonstration sites (business-as-usual vs. renovation scenarios) and individual Retrofit-Kit components/technologies. Only the aggregated LCA and LCC results shall be presented. No sensitive (disaggregated) data shall be reported in D8.3.
Any documentation on energy efficiency calculation algorithms, existing or
developed during this project as part of or related to the EcoSolutions tool,
as well as related code or knowledge, is not considered as data in the present
Data Management Plan, should not be used as such, and will be addressed later,
if required, by a separate Intellectual Property agreement formalised in a
Collaboration Agreement (CA).
Concerning data collected during the analysis of the Fribourg (CH) early
adopter sites, an anonymised version will be available within the repository.
The data has to be treated in order to protect the owner and users/inhabitants
of the site.
## 4.3 Making data Interoperable
All data and metadata will be produced in vocabularies, standards and formats
compliant with the selected open access repository requirements and thus will
be interoperable. Open source software will be used in the project so as to
increase the transferability and repeatability of its output.
## 4.4 Increase data Re-use
Appropriate licenses, such as the Creative Commons (CC BY) license will be
used for allowing data Re-use [8].
Data will be made available for re-use after the end of the project. The data
that are associated to the developed technologies, tools and processes will be
made available as soon as IPR and/or patents have been acquired so as to
secure commercial rights on technologies. The re-usability of the LCI database
implemented in the RAT platform will be aligned with the exploitation strategy
(global platform + individual components, e.g. EPIQR) and decided along the
project.
Peer-reviewed scientific publications of the project's results will be 'gold'
open access articles, and will be published immediately in open access mode.
Demo site owners can make data from the demonstration sites available for
re-use only if it does not prejudice the partner's commercial image. It should
not be possible for a third party to state publicly that the partner's
refurbishments are unfair to tenants, meaning costly, out of schedule,
disrespectful, etc.
# 5 Allocation of resources
According to Article 6, of the Grant Agreement, costs for open access and
protection of data can be justified under D.3 Costs of other goods and
services, and in accordance with Article 10.1.1 and Article 11.1.
Each work package leader/task leader will be responsible for managing the data
collected through their task. The project coordinator will be responsible for
overall data management.
Specifically for the personal data, “data owners”, “collectors” and
“processors” will be identified in accordance with the GDPR.
# 6 Data security
The data will be physically stored on PCs’ and virtual machines’/servers’ hard
drives. The PCs and servers will be located in a secure location in the
partner institutions or IT service providers and will be password protected.
Access to these PCs and servers will be granted to staff directly involved
with the RECO2ST project. Institutional and departmental level backup
mechanisms and policies will cover data backup during the project's lifetime.
Additionally, data can be backed up to an external hard drive. If data transfer
is required, data will be transferred between sites using secured network
connections to recognised standards, as implemented by each institution's
central IT services.
Depending on the requirements of the use case and demonstrator scenarios and
the available IT infrastructure, the data can be collected at 15-minute
intervals and stored on a local server or in the cloud. The backup procedures
will be defined as needed for each data owner and processor and can be set
according to the available storage capacity. Each scenario will need to ensure
that the data is secure during collection and storage, as well as during
transport between servers or users.
Certified repositories will be selected for long term preservation and
curation of data.
# 7 Ethical issues
The ethical and legal issues that are related to data sharing and protection
of personal data are managed within Work Package 10 and Deliverable 10.1 NEC-
Requirement No. 1. In this respect, the provisions and requirements of the GDPR
will be implemented for the protection of personal data. Informed consent will
be obtained for data collected through questionnaires or interviews.
1318_I4MS-Go_768631.md
<table>
<tr>
<th>
**DoA**
</th>
<th>
Description of Action
</th> </tr>
<tr>
<td>
**EC**
</td>
<td>
European Commission
</td> </tr>
<tr>
<td>
**H2020**
</td>
<td>
Horizon 2020
</td> </tr>
<tr>
<td>
**GA**
</td>
<td>
Grant Agreement
</td> </tr>
<tr>
<td>
**CA**
</td>
<td>
Consortium Agreement
</td> </tr>
<tr>
<td>
**KPI**
</td>
<td>
Key Performance Indicator
</td> </tr>
<tr>
<td>
**FSTP**
</td>
<td>
Financial Support to Third Parties
</td> </tr> </table>
**1\. Data Management Plan delivery and updates**
The Data Management Plan (DMP) has been elaborated in agreement with all
project partners. This document is a first version, as requested by the EC.
Changes have been introduced as a result of legal changes, namely the new GDPR
regulation coming into force. The plan will be updated over the course of the
project in the cases stated in the Guidelines on FAIR Data Management in
Horizon 2020:
* significant changes such as new data;
* changes in consortium policies;
* changes in consortium composition and external factors;
* among others that might be of relevance.
2. **Data Summary**
As described in the Guidelines on FAIR Data Management in Horizon 2020, a Data
Management Plan is a key element in ensuring data is well managed. For this
reason, we will first identify the types of data that will be generated in the
framework of the project:
1. Data generated from accessible information, such as reports published on the I4MS website, news, events, open calls, and evolutions and novelties of the smart manufacturing and I4MS ecosystem, among other topics related to the objectives of the project;
2. Data generated from project partners and external evaluators activities, such as discussions with the members of the board and key stakeholders of the ecosystem, evaluation reports, establishment of project priorities, development of the acceleration programme tools, evaluation reports among other work carried out in order to achieve project goals;
3. Data generated involving third party (beneficiaries of the Disruptors Awards), such as details of projects submitted under the two calls for proposals, results obtained during the acceleration programme, research data, interviews and presentations.
4. Data generated through the I4MS platform: this virtual platform is set up to raise awareness of the I4MS project and its objectives, to facilitate the interactions between the members of the ecosystem by offering an online tool to connect and discover new business and funding opportunities, as well as to receive first-hand information about best practices of SMEs having received funding for their digital transformation processes, I4MS key enabling technologies, and the technological support and assessment available and offered by the Innovation Actions under the I4MS initiative, among others.
According to another classification, which does not exclude the previous one,
there are two types of data collected:
* Personal data
* Data related to the business activity of the participants and their participation in the program.
1. **State the purpose of the data collection/generation**
FBA is responsible for launching the Disruptors Awards, a prize to select top
SMEs and mid-caps having undergone a digitisation process. Applications of the
most innovative application experiments will be received through an online
form within the FundingBox Platform, which will be used during the project's
Open Calls. The information gathered will serve to evaluate and award the most
promising application experiments in the area of I4MS. Therefore, it is
necessary to collect, store and process the online forms that will be
submitted by applicants.
Data will be exploited for three main purposes:
* evaluation of proposals;
* impact assessment;
* research.
The anonymised datasets will be exploited through the creation of maps and
charts that will be updated at the end of the selection process of each Open
Call. The maps and charts generated will be publicly shown as part of the
dissemination activities of the project. The full set of anonymised data will
also be available to third parties, mainly the European Commission services,
that would request access to the information for research purposes.
2. **Relation of data collection to the project objectives**
The data sets to be collected during the Open Calls in order to facilitate
good analysis of proposals include (non-exhaustive list): Country;
Organization name; Project name; Address; Manufacturing sector, Innovation
potential, business potential, project having funded the application
experiment, TRL level, description of the Application Experiment, etc. All
these data sets will be represented in a mapping of submitted proposals. The
selected proposals are a subgroup of the submitted proposals and are
identified with a dedicated field. It is expected that a relevant deal flow of
applications will be generated through the I4MS calls along the project, which
will contribute to creating five main data sets:
1. Applicants that start an application but don't submit a proposal;
2. Submitted proposals;
3. Evaluated proposals;
4. Winners;
5. Follow up metrics.
3. **Data protection**
Datasets will be anonymised for impact assessment and research purposes. The
personal data collected as part of the project will be limited to the project
submission, and informed consent of participants about the use of personal
data will be required. Personal identity will be protected by the use of
anonymous codes. The relation of real names and codes will only be known to
FBA, who will keep the records in a secure place. Applications will be coded
and made available to internal evaluators under this coding. In case data
needs to be transferred to non-EU partners, we will obtain approvals from the
competent Data Protection Office, unless those countries are in the list of
countries that provide adequate safeguards with respect to the protection of
the privacy and fundamental rights and freedoms of individuals and as regards
the exercise of the corresponding rights. All copies of approvals/notifications
regarding the processing of personal data will be made available upon request
to the EC. Personal data will be encrypted and stored securely.
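A minimal sketch of the anonymous-code approach described above: random codes replace applicant names in the working data set, while the name-to-code mapping is kept separately (here, a plain dict standing in for FBA's secured records). All names and fields are hypothetical.

```python
import secrets

applicants = ["Alice Example GmbH", "Bravo Retrofit Ltd"]  # hypothetical names

code_registry = {}    # name-to-code mapping, kept only by the administrator (FBA)
anonymised_rows = []  # working data set shared with evaluators

for name in applicants:
    code = "APP-" + secrets.token_hex(4)          # e.g. "APP-9f1c2ab0"
    code_registry[name] = code
    anonymised_rows.append({"applicant_code": code,
                            "country": "XX", "sector": "XX"})

print(anonymised_rows)  # contains no directly identifying information
```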
Personal data will be processed in accordance with the GDPR. FBA is the
administrator of the personal data obtained during the open calls and provides
applicants with the information concerning personal data processing.
4. **Types and formats of data generated/collected**
The type of data collected will include specific indicators to evaluate the
potential of the proposals. Such indicators include measurements of the
innovation potential and maturity of proposals, the team and the organisation
proposers, the technology used and technology experience, the market
orientation, the financial aspects and the benefits expected. Generic
information is being collected in textual or numeric format, while the data
regarding value propositions will be collected in a multiple-choice format.
5. **Origin of the data**
The information will be captured through online forms and will be recorded and
stored in FBA
Cloud infrastructure as an object database. The information will be accessible
through an online Dashboard application and it will be downloadable in csv and
xls formats. Only authorised users will be allowed to access the data sets via
authentication.
6. **Data utility: to whom will it be useful**
The data will be exploited by project partners and external evaluators for
four main purposes:
1. evaluation of proposals;
2. benefiting from the I4MS prize package;
3. research;
4. dissemination.
The individual registers will only be accessible for evaluation purposes, to
be carried out by the selected evaluators. Each evaluator will be granted
limited access to a restricted number of registers from the data set. Before
evaluators are given access to the data, they will be requested to sign
online, using a secure authenticated mode, an 'Acceptance of the use of data
(GDPR)' and a 'Declaration of confidentiality and no conflict of interest'.
7. **Intellectual Property Rights (IPR)**
In general, foreground (e.g. results including intellectual property generated
during the project) will be owned by the party that achieves the results. The
same will apply to the results achieved by SMEs and mid-caps who act as
beneficiaries of I4MS Disruptors Awards (third parties). All the knowledge,
data and results deriving from the application experiments carried out by the
beneficiaries will remain as their property only.
Each partner and the ‘Third party Beneficiary’ is responsible for taking the
appropriate steps for securing intellectual property of the knowledge or
results created during the application experiment. In any case, I4MS CSA will
follow the general principles for IPR as described in the ‘Model Grant
Agreement for the Horizon 2020 Framework Program’.
3. **Fair Data**
The I4MS CSA project will integrate the data from all the applications
participating in the open calls. The collection of data through an online
application form will facilitate data integration by having the information of
third parties structured in a standard form, together with the interviews and
articles published on the I4MS platform.
1. **Making data findable, including provisions for metadata**
This document explains in detail how the data management plan will support the
effective collection and integration of the I4MS data. Storage, processing and
sharing will occur via the FundingBox proposal submission platform and
different events and meetings and also via the I4MS website and blog. During
the Disruptive Awards data will be anonymised meaning that data will not
identify any individuals.
1. **_Discoverability of data (metadata provision)_ **
In order to be able to use the data generated by the project, it is essential
to integrate data from the participants in the open calls and the activities
undertaken by project partners. Taking into account the FAIR data principles
(Wilkinson et al., 2016), (meta)data should:
* Be assigned a globally unique and persistent identifier;
* Contain enough metadata to fully interpret the data, and;
* Be indexed in a searchable source.
By applying these principles, data becomes retrievable and includes its
authentication and authorisation details.
2. **_Data identification mechanisms_ **
All documents associated to one particular project will be identified with a
unique and persistent number that will be given at the time of the submission
process.
Examples:
* 001ApplicationForm
* 001FinancialIdentification
* 001Demo
* 001EvaluationReport
* 001FinalMonitoring
As for documents related to project activities and/or deliverables, the task
or deliverable number will be used to identify the document, followed by a
brief title of the activity or deliverable.
3. **_Naming conventions used_ **
The recommendations for naming project documents and facilitating their
retrievability are as follows:
* Choose easily readable identifier names (short and meaningful);
* Use capital letters to delimit words instead of spaces or underscores;
* Do not use acronyms that are not widely accepted;
* Do not use abbreviations or contractions;
* Avoid language-specific or non-alphanumeric characters;
* Add a two-digit numeric suffix to identify new versions of one document.
* Dates should be included back to front and include the four-digit years: YYYYMMDD.
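To illustrate the conventions above, the hedged Python sketch below builds and checks document names following the listed rules (capitals delimiting words, a two-digit version suffix, YYYYMMDD dates); the helper itself and the ordering of the name parts are assumptions, not part of the project tooling.

```python
import re
from datetime import date

def make_doc_name(words: list[str], version: int, when: date) -> str:
    # Capitals delimit words; date back to front (YYYYMMDD); two-digit suffix.
    stem = "".join(w.capitalize() for w in words)
    return f"{when:%Y%m%d}{stem}{version:02d}"

NAME_RE = re.compile(r"^\d{8}[A-Za-z0-9]+\d{2}$")  # rough validity check

name = make_doc_name(["data", "management", "plan"], 2, date(2018, 5, 25))
print(name, bool(NAME_RE.match(name)))  # 20180525DataManagementPlan02 True
```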
4. **_Approach towards search keyword_ **
Documents related to the activities of the project will be prepared following
the templates agreed by the consortium; these templates include a keywords
section to make documents findable.
The information submitted by the applicants to the open calls will use
keywords related to the industry where digitisation processes have taken place
using the technologies covered by I4MS such as: footwear, welding, agri-food,
aircraft, automotive...
The keywords used to easily identify documents related to a specific project
will be the ones used throughout the submission process, where applicants will
have to select the characteristics of their projects by choosing descriptors
from a dropdown menu.
An Excel spreadsheet with all information about the projects will be prepared
in order to identify projects submitted, for example, under one specific
challenge. Excel will be an efficient tool to filter projects by their
characteristics. The "export" functionality of the FundingBox platform allows
such a listing.
**3.1.5._Approach for clear versioning_ **
Only documents created by the consortium will be versioned; for this purpose,
templates include three descriptors to identify the version and status of the
documents:
DOCUMENT HISTORY
<table>
<tr>
<th>
Version
</th>
<th>
Status
</th>
<th>
Date
</th>
<th>
Comments
</th>
<th>
Author
</th> </tr>
<tr>
<td>
1
</td>
<td>
XX
</td>
<td>
XX
</td>
<td>
XX
</td>
<td>
XX
</td> </tr>
<tr>
<td>
2
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
3
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
4
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
Status
</th>
<th>
(a status is associated to each step of the document life cycle)
</th> </tr>
<tr>
<td>
Draft
</td>
<td>
This version is under development by one or several partner(s)
</td> </tr>
<tr>
<td>
Under review
</td>
<td>
This version has been sent for review
</td> </tr> </table>
Figure 1. Screenshot status of the document
Moreover, partners, following the recommendations included in Section 3.1.3,
will identify the different versions by using a two-digit number following the
descriptor Draft. A document reviewed by another partner should be returned to
the principal author with "rev" plus the acronym of the reviewing organisation
appended. Only the principal author will change the draft number and will add
the word FINAL to documents ready to be sent to the EC or those to be used as
final versions. The process is as follows: the document history included in
the document template should be filled in as shown below.
DOCUMENT HISTORY
<table>
<tr>
<th>
Version
</th>
<th>
Status 1
</th>
<th>
Date
</th>
<th>
Comments
</th>
<th>
Author
</th> </tr>
<tr>
<td>
1
</td>
<td>
Draft
</td>
<td>
01/02/2017
</td>
<td>
Section 2.1. needs to be completed
</td>
<td>
ABC
</td> </tr>
<tr>
<td>
2
</td>
<td>
Under review
</td>
<td>
02/02/2017
</td>
<td>
Section 2.1. completed. Comments added in the document.
</td>
<td>
CDE
</td> </tr>
<tr>
<td>
3
</td>
<td>
Draft
</td>
<td>
04/02/2017
</td>
<td>
Added suggestions by CDE
</td>
<td>
ABC
</td> </tr>
<tr>
<td>
4
</td>
<td>
Under review
</td>
<td>
06/02/2017
</td>
<td>
Included some topics on section 2.1.
</td>
<td>
XYZ
</td> </tr>
<tr>
<td>
5
</td>
<td>
Issued
</td>
<td>
15/02/2017
</td>
<td>
Final version with partners
contributions
</td>
<td>
ABC
</td> </tr> </table>
**3.1.6._Standards for metadata creation (if any)_ **
Basic metadata will be used to facilitate the efficient recall and retrieval
of information by project partners and external evaluators, and to help easily
find the information requested. To this end, all documents related to the
project have to include on the front page information about author(s) and
contributor(s), WP, dissemination level, nature of the document, synopsis and
keywords.
Regarding the information submitted by application experiment beneficiaries,
the criteria included in the application form will also be used to identify
documents and make data findable. The application form is submitted online via
the FundingBox platform using multiple-choice questions, which will facilitate
the creation of a database and the identification of projects by their
characteristics.
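As an illustration only, the front-page metadata listed above could be captured as a machine-readable record such as the following Python dictionary; the keys mirror the required fields and all values are hypothetical.

```python
front_page = {
    "title": "D1.5 Data Management Plan",          # hypothetical document
    "authors": ["ABC"],
    "contributors": ["CDE", "XYZ"],
    "work_package": "WP1",
    "dissemination_level": "PU",
    "nature": "Report",
    "synopsis": "Procedures for managing data collected in the project.",
    "keywords": ["data management", "open calls", "I4MS"],
}
```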
**3.2. Making data openly accessible**
**3.2.1._Data that will be made openly available_ **
The full set of anonymised data will also be available to third parties that
would request access to the information for research purposes. Furthermore,
the anonymised datasets will be exploited through the creation of maps and
charts that will be updated for dissemination and communication purposes. The
maps and charts generated will be publicly shown as part of the dissemination
activities of the project.
**3.2.2._Process to make data available_ **
The availability of project data will depend on the purpose for which third
parties intend to use it and on the added value of sharing such data. Third
parties interested in using the data generated by the project will be able to
make contact via the project email [email protected]. Moreover, the Dashboard
application of the FundingBox platform will also be used to share data. Only
anonymised data may be shared.
**3.2.3._Methods or software needed to access the data_ **
No specific software tools will be needed to access the data, since anonymised
data sets will be saved and stored in Word, PDF or Excel formats to facilitate
their exploitation and guarantee their long-term accessibility.
**3.2.4._Deposit of data, associated metadata, documentation and code_ **
I4MS will collect data of European SMEs, through an online form within
FundingBox Platform, which will be used during the Disruptors Awards Open
Calls. Data will be deposited and secured in the FundingBox platform.
**3.2.5._Access to data in case there are any restrictions_ **
The FundingBox platform allows the creation of users with different access
levels. Access can be granted online for a limited period of time, to specific
information, and using a secure mode via authentication.
**3.3. Making data interoperable**
**3.3.1._Interoperability of data assessment_ **
Partners will be responsible for storing the data in a comprehensive format,
adapted to the real and current needs of the practitioners potentially
interested in using, merging or exploiting the data generated throughout the
project. The assessment of data interoperability will be updated in future
reviews in order to guarantee that the I4MS data fits the needs of a specific
scenario (such as interests or purpose of data), as proposed by GRDI2020 in
its report on Data Interoperability (Pagano, P. et al. 2013).
**3.3.2._Vocabulary use_ **
The vocabulary used in the project is very standard and common language within
the Industry 4.0 ecosystem and the technologies involved. Vocabulary will not
represent any barrier to data interoperability and re-use.
**3.4. Increase data re-use (through clarifying licenses)**
**3.4.1._Data license_ **
The clauses on Access Rights (Section 9) and Non-disclosure of information
(Section 10) included in the Consortium Agreement (CA), as well as Deliverable
7.1, which describes the project's approach to ensuring that the applicants to
the Disruptors Awards and the members of the I4MS community conform to the
ethical standards on privacy and data protection, will be the key features
governing the use of data by third parties.
Information related to the winning SMEs or any other communications related to
specific entities, such as the name of the entity, will be published for
dissemination purposes only after having obtained the beneficiaries’ consent.
As described in section 3.2.2. the I4MS mailbox will be the communication tool
used to request the access to data.
Regarding the data produced by sponsored projects, each beneficiary will be
responsible of permitting or restricting the access to their data and results.
**3.4.2._Data re-use availability period_ **
Statistical data related to the open calls and information about the winners
will be made accessible once the final winner is published. Other results,
such as the names of participants in the I4MS Acceleration Programme, will be
released in agreement with the participating SMEs and will be available for 4
years after the end of the project, unless otherwise stated in the laws in
force or the GA.
**3.4.3._Data quality assurance processes_ **
The project coordinator will be responsible for assuring the quality of the
data by making sure datasets follow the FAIR principles included in this plan
and that data is kept up to date.
Personal data processing will be done following the EU, national and
international laws in force (in particular Regulation (EU) 2016/679 of the
European Parliament and of the Council of 27 April 2016 on the protection of
natural persons with regard to the processing of personal data and on the free
movement of such data, and repealing Directive 95/46/EC (General Data
Protection Regulation) taking into account the “data quality” principles
listed below:
* Data processing is adequate, relevant and non-excessive;
* Accurate and kept up to date;
* Processed fairly and lawfully;
* Processed in line with data subjects’ rights;
* Processed in a secure manner;
* Kept for no longer that necessary and for the sole purpose of the project.
**3.4.4._Length of time for which data will remain re-usable_ **
The Consortium will contribute to keeping data re-usable for as long as
possible after the end of the project. A first period of 4 years has been
established; however, this time can be extended under partner agreement, laws
in force or the GA. This period can vary depending on the value of the data
after the end of the project.
4. **Allocation of resources**
1. **Cost of making data FAIR**
No extra costs, apart from those linked to the maintenance of the FundingBox
platform, are expected for making data FAIR.
2. **Data management responsibilities**
Concerning the data of applicants and beneficiaries, FundingBox will be
responsible for managing the data stored in its platform
(_https://fundingbox.com/_).
Regarding the data resulting from the activities of the project, each WP
leader will be responsible for the storage and compliance of the data, which
will then be uploaded to the I4MS online community or other storage systems
used to share the information of the project, as included in the Data
Management Plan (D1.5).
Each partner is responsible for all obtained data during their processing and
acquisition in their own organization.
The I4MS coordinator, assisted by the WP leaders, will be responsible for
updating this document and developing a strategy to encourage:
* the identification of the most suitable data-sharing and preservation methods;
* the efficient use of data, assuring clear rules on its accessibility;
* the quality of the data stored; and
* the storage of data in a secure, user-friendly interface.
3. **Costs and potential value of long term preservation**
As stated in Section 4.1, costs of data storage and maintenance are not going
to require extra funding once the project ends. As for the value of the data,
it is important to take into account that the topics covered by the project
respond to a current need of manufacturing SMEs and mid-caps and are related
to the technological advancements of the four areas covered by I4MS (HPC
cloud-based simulation, IoT and CPS, Additive Manufacturing and robotics).
Therefore, data coming out of this project will have a direct impact in the
coming years, but might lose relevance as the challenges are tackled or
replaced by other priorities.
5. **Application Data Security**
I4MS will collect data of SMEs, through an online form within FundingBox
Platform which will be used during the Disruptors Awards Open Calls and other
administration processes managed by FBA, such as registration to events. Data
will be deposited and secured in the FundingBox platform. The information will
be captured through online forms and will be recorded and stored in FundingBox
Cloud infrastructure as an object database. The information will be accessible
through an online Dashboard application and only the anonymised data will be
downloadable in csv and xls formats. Only authorised users will be allowed to
access the data sets via authentication.
The FundingBox platform applies technological and organisational measures to
secure the processing of all data, in particular personal data, against
disclosure to unauthorised persons, processing in violation of the law, and
change, loss, damage or destruction.
* Information security: SSL (Secure Socket Layer) certificates are applied. In order to ensure the appropriate level of security, the password for the account will exist on the platform only in a coded form. Registration on and logging in to the platform proceed over a secure https connection. Communication between the user's device and the servers will be encoded using the SSL protocol.
* Use of passwords to access data sets: the FundingBox platform offers 4 different access levels/roles (administrators, developers, evaluators and invitees) to secure access to data against unauthorised users.
* Options for reading data: the platform offers the possibility to make data available in a read-only or downloadable format, hindering the access to information by unauthorised users. Once an Open Call finishes information is archived, so it’s no longer publicly accessible, only administrators will have access to the historic data in a read-only mode.
* Back-up policy: complete and redundant back-ups are done every hour. Moreover, every time a modification is done an older version is saved.
* Accidental deletion or modifications: in case of a catastrophic event that implies the partial or complete deletion of the data sets, the data from the most recent back up will be automatically restored (back-up won’t be older than 60 minutes). In case of accidental deletion or modification only the most recent document will be restored, so in case of accidental changes or deletion data can be easily recovered.
* Deletion or modification of data by users: only administrators have the rights to delete or modify the information included in the datasets. Under exceptional circumstances administrators can be given the permission to delete applications (utilities offered by the FundingBox platform) but the user responsible of its creation will be notified before doing so.
* Deletion of data by participants in open calls: users having started the application process can withdraw any time using the FundingBox platform before the deadline for submission.
* Terms and conditions: the FundingBox platform have specific terms of use and conditions that have to be accepted by all users of the platform.
  * FundingBox terms of service: _https://fundingbox.com/about/terms_
  * FundingBox platform privacy policy: _https://fundingbox.com/about/privacy_
Each partner is responsible for all obtained data during their processing and
acquisition in their own organization. Each partner is obliged to implement
appropriate security measures to ensure the confidentiality of the data.
6. **Public funding disclaimer**
All data produced within the framework of the project will inform of the
funding source by adding the following disclaimer and EU flag:
“This project has received funding from the European Union’s Horizon 2020
research and innovation programme under grant agreement No 768631”
1322_CATALYST_768739.md
# Executive Summary
This deliverable provides an update to the initial version of the CATALYST
Data Management Plan. It refines the information presented in D7.1 and also
updates some of the data sets reported in D7.1, complemented by data sets
coming from simulations and external sources (other than the CATALYST pilot
data centres). It also contains additional information concerning the
availability, licensing and security of specific data sets. Data sets are
orchestrated with the trials defined in D7.3.
This document defines data management procedures and identifies data sets
related to project pilots and specific activities. It describes the data
sources and owners in the CATALYST project, together with corresponding data
security and preservation aspects. Description of the data sharing platform,
to which some of the CATALYST data can be published, is provided. The CATALYST
Data Access Committee, defined within this Data Management Plan, will make
decisions about the use and potential sharing of specific instances of data
sets obtained during the project course. This document reflects the current
state of the project data, and we do not preclude the inclusion of other data
sets or changes if such a need occurs. The first results concerning the use of
specific data set instances will be described in D7.4, due in M26.
This Data Management Plan follows the template provided by the European
Commission in the Participant Portal.
# Introduction
The CATALYST project takes part in the Open Research Data Pilot. According to
obligations related to the pilot, a final version of the Data Management Plan
(DMP) is provided in this document.
The main role of the CATALYST DMP is to identify, describe, and propose
procedures to manage the data collected and processed by the CATALYST
consortium in order to realize the project objectives, which are as follows:
▪ CATALYST will adapt, scale up, validate and deploy an innovative, adaptable and flexible technological and business framework by leveraging on FP7 GEYSER and DOLFIN TRL 4/5 results, with the aim to provide DCs with a set of TRL 6/7 enabling solutions and tools, which:
* Use and trade the wasted DC heat to lower the energy footprint, reduce DC energy costs and create a new DC income source over longer time;
* Assess resiliency of energy supply and flexibility, against adverse climatic events or abnormal demand, trading off DC assets energy generation/consumption against local/distributed RES, energy storage and efficiency;
* Deliver energy flexibility services to the surrounding energy (power and heat) grids ecosystems;
* Exploit migration of traceable ICT-load between federated DCs, matching the IT load demands with time-varying on-site RES surplus availability or where heat generation is needed (follow-the-energy approach).
* Implement novel multi-carrier marketplace mechanisms (in the form of MaaS) to support novel ESCO 2.0 like business models and secure/traceable micro-contracts.
▪ The CATALYST framework will be adaptable to a broad variety of DC categories, ranging from different DC types (co-location, enterprise, HPC DCs), to different geographical locations, to different architectures (large centralized versus fully decentralized micro-DCs) and energy efficiency orientations.
To realise these objectives the CATALYST project collects information on DC
energy (electricity, heat and cooling) consumption and generation as well as
other resource usage, for example water consumption and IT systems
utilization. Based on this information CATALYST prepared a Data Management
Plan (updated version of the initial DMP) accommodating all necessary measures
for proper data handling. This DMP document defines how data collected or
generated by the CATALYST project is organised, stored and shared.
The aim is to gather data which is valuable for the technological and scientific evaluation of the project achievements while respecting privacy and legislation (e.g. as mandated by the GDPR, i.e. Regulation (EU) 2016/679, and Directive (EU) 2016/680). Those project data which are allowed for open access will be anonymised (if needed) and stored in repositories, including publication to well-known open platforms (OpenAIRE [1]).
## Intended Audience
The audience of this document comprises all entities interested in the results of the CATALYST project, especially the delivered data sets. Thus, this document provides relevant information for DC managers and operators as well as external stakeholders or researchers who would like to process the data output by CATALYST. Finally, all partners providing and managing data in the project should become familiar with the content of this document.
## Relations to other activities
The work related to this report is correlated with activities undertaken within other WPs and tasks, especially the ones that result in data provision. There is a natural dependency on the activities concerning trials (WP7): data sets produced by each of the trials at the pilots' side are the subject of the data management process. Additionally, there are dependencies on WP4 (providing prediction and optimisation of DC flexibility data), WP5 (providing market data) and WP8 (providing social data).
## Document overview
This report contains a final Data Management Plans (DMP) of the CATALYST
project. It is an update of the first version provided in M6. The main differences include:
* Data sets other than raw data from pilots, e.g. market or simulation data (in M6, details of such data sets were under discussion)
* More details on the use of OpenAIRE and Zenodo public repositories
* Updated data sets from pilots
* Information on sharing and preservation of specific data sets
The remainder of this report is structured as follows:
Section 3 contains a general description of data collected and used by the
CATALYST project.
Section 4 describes how these data can be identified, shared and made
interoperable.
Section 5 explains the way of financing the DMP by the project.
Section 6 contains security and ethics considerations along with references to
other regulatory frameworks.
Section 7 introduces a template to describe CATALYST data sets along with
definitions of specific project data sets.
# Data sources and owners in CATALYST
In this chapter, a general description of data collected and used by the
CATALYST project is introduced. The description includes information about
data sources and owners in CATALYST, basic data characteristics, and data sets
use. A more detailed description of the data, including the specific identified data sets, is provided in Section 7.
In CATALYST, data comes from 4 main types of sources:
* Technical data related to pilots and trials executed in the project
* Other technical data (not coming from pilots), for example simulation data
* Market data – data concerning energy market (mostly simulated)
* Social Data related to people interacting with the project
## Pilots data sources
Technical data collected and processed within the project will come mostly from the CATALYST pilots. That means data collected both from real data centres and from separate testbeds, accompanied by data from surrounding systems such as renewable energy sources, energy management systems, buildings, etc.
Apart from the data that come directly from project pilots (and partners), there are also other sources related to potential system configurations or markets considered within the project. Those data sets can be either generated by simulation tools or taken from external sources if available.
The purpose of collecting and processing these data sets is, first of all, to test and verify the CATALYST platform and solutions in several different environments. Data coming from real environments will be used to populate the CATALYST data model and, based on it, to perform optimisations verifying potential gains. Various data sets will be used to validate the project approach in different settings. Re-using these data sets will help improve the project outcomes. Those data sets that can be shared and re-used will allow others to evaluate the requirements and benefits of the CATALYST platform.
The pilots that are the main source (and owners) of data collected and processed within CATALYST are briefly summarised below. A more detailed description of the pilots and corresponding trials is included in the D7.3 report: Trial set-up, test & evaluation methodology.
* Pilot 1: ENG PSM
The ENG pilot involves the Pont Saint Martin Green DC, located in the mountainous Valle d'Aosta region. It is one of the biggest Italian data centres, managing about 350 customers and hosting about 8 petabytes of data. The PSM DC, classified as Tier IV, works as a colocation DC, hosting and providing innovative technology services that can be tailored to current customer processes and scenarios. In the context of this project, the overall DC will be considered as the testbed.
* Pilot 2: PSNC – HPC
The PSNC pilot consists of 2 parts.
The first part, a micro data centre laboratory, enables the execution of trials with full control of the whole DC environment and the possibility to measure the impact on this environment. In the PSNC micro data centre laboratory there are 2 racks with ~120 server nodes (some of them being low-power micro-servers). The IT equipment of the micro DC consumes approximately 10 kW of power at maximum load and is connected to a photovoltaic system consisting of 80 PV panels with 20 kW peak power. Therefore, it allows performing trials with different availability/cost of energy.
The second part of the pilot consists of the data coming from the whole main PSNC data centre and the nearby university campus, used to study heat re-use scenarios. The PSNC data centre is already equipped with a heat re-use system for office heating. Within CATALYST, a study on the possibility of re-using the DC's waste heat on the campus is being performed.
* Pilot 3: SBP Co-location data center
The SBP pilot consists of several parts, the main ones being the colocation data centre and the data centre utility area.
The former provides information on an energy re-use system that subtracts waste heat from the DC data room hot aisles and transports it towards a heating system in the DC office part using a VRF (heat pump) system. This pilot part is based upon calculations of the energy efficiency and effectiveness of this low-temperature heating system. The reduction in primary energy use (a natural gas, high-temperature heating system replaced by heat pump power demands) is being determined by calculations. The SBP data centre is equipped with a high-temperature office heating system (natural gas boilers).
The latter provides information on an energy re-use system that subtracts waste heat from the flywheel part of the DC DRUPS system and transfers it towards the DRUPS generator part, pre-heating the generator with the flywheel waste heat instead of electrical heating systems. This pilot part is based upon calculations of the waste heat re-use effectiveness on the primary energy consumption of the DRUPS system. A detailed description of the SBP pilot is presented in D7.3.
* Pilot 4: QRN Distributed HPC
For the pilot, the QRN distributed DC will use a dedicated testbed representative of the production environment. The DC controller will collect all the data usually monitored during regular operation, such as heatsink and ambient temperature. Power consumption is also an important quantity monitored in the QRN infrastructure.
The testbed corresponds to one flat containing 7 heaters, i.e. 21 nodes, and therefore approximately 2.5 kW maximum.
## Non-pilots technical data sources
Apart from data collected directly from infrastructure used by project DC
pilots, data related to other potential configurations (difficult to impose in
real systems), larger systems or the whole markets are used by the project.
These data can be based on the outcomes of simulations, models, etc. from
other WPs.
In particular, the simulation data complements data collected from pilots in
order to investigate various configurations and conditions. The simulation
data include the whole data centre simulation using the System of Systems
approach, Computational Fluid Dynamics (CFD) simulations, and predictions made
using machine learning methods.
For example, in order to prepare a training data set for an ML algorithm, a set of CFD simulations was performed (as described in deliverable D4.1 Smart Energy Flexibility Modelling). The purpose was to obtain various server room unsteady responses for diverse initial and boundary conditions. For the needs of the study, a simplified model of a server room was utilized, consisting of two server racks and a raised-floor cooling system. The model and all the simulation setups were prepared with PSNC's framework dedicated to server rooms, based on the OpenFOAM [ **2** ] software. Data come from simulations with diverse parameters: initial internal server room temperature, air conditioner volumetric flow rate, air conditioner outlet temperature, and power consumption of both racks present in the virtual room. The collected results cover records from virtual probes located at specific points of the domain and contain several physical variables such as temperature, velocity, pressure and turbulence parameters (a minimal parsing sketch is given below). Together with these results, the configuration files of every scenario are stored to preserve the possibility of re-running them in the future.
A definition of data sets related to simulations and predictions can be found
in Section 7.2.2.
## Market data sources
Over the course of the project, CATALYST Marketplace data will be generated in
a simulated way using fake names and values. Specifically, the following data
will be generated:
* general information about energy prices, market sessions, market actions, transactions, invoices;
* personal data about market participants, market operator, aggregators, DSO(s);
* user credentials, hashed using the PBKDF2 algorithm (only the hashed value is preserved; see the sketch below)
A definition of a data set related to the CATALYST Marketplace can be found in
Section 7.2.4.
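To make the above concrete, the following is a minimal sketch of PBKDF2 credential hashing in Python; only the salt and the derived hash would be stored, never the plaintext password. The iteration count and salt size are illustrative assumptions, not the marketplace's actual parameters.

```python
# Minimal PBKDF2 sketch; iteration count and salt size are illustrative.
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000):
    """Derive a hash from the password; store only (salt, digest)."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = 100_000) -> bool:
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```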
## Social data sources
In addition to the technical work on trials with the use of software developed
within the project and applied to CATALYST pilots, it is foreseen that data
sets with information about people interacting with the project will be also
collected. For example, data about Green Data Centre Stakeholder Group (GDC-
SG) members, possibly with their opinions and knowledge, will be processed
within the project. Therefore, an initial data set for this kind of data has also been defined in this deliverable.
The purpose of creating these data sets is to receive feedback from relevant
stakeholders of the energy flexibility ecosystems based on data centres.
Generally, the data will consist of personal data, opinions, minutes, and questionnaires. These data sets will be used internally for steering the project development. The conclusions coming from the analysis of the data sets related to the GDC-SG may be used as an interesting source of information about the future of data centres and as a validation of the applicability of the CATALYST concepts and results.
A definition of a data set related to GDC-SG can be found in Section 7.2.3.
# Data sharing and re-use
## Making data findable, including provisions for metadata
Data sets are identified based on a simple taxonomy.
The _Data Source_ is defined as:
* pilot
* external
* simulated
* other
_Infrastructure type_ defines specific system, from which data is collected:
* data centre
* colocation
* enterprise
* cloud
* HPC
* distributed
* renewable energy source,
* smart grid,
* others
_Infrastructure purpose_ is defined as:
* testbed,
* real production infrastructure
* mixed environment
Each data set related to trials will be identified with the following _Identifier_ : [TrialID/PilotID/InfrastructureID]. Additionally, the data sets will be identified by the time at which the corresponding data was collected.
For those data sets that can be shared the metadata will be defined according
to requirements of sharing platforms (as defined in Section 4.2.3).
Data collected and processed within the project will be stored either using a simple key-value format or the CATALYST data format, which is based on the extended ICT Topology Graph Database from the DOLFIN project. Hence, data will be provided in a universal and simple way, allowing its analysis and potential re-use (an illustrative sketch follows).
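As a minimal illustration of the identifier taxonomy and the key-value storage format described above, the following sketch constructs a data set identifier and one measurement record; the concrete IDs, keys and values are examples only, not normative CATALYST definitions.

```python
# Illustrative sketch of the [TrialID/PilotID/InfrastructureID] identifier and
# a simple key-value record; all concrete values below are examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataSetId:
    trial_id: str            # e.g. "PSNC_DC_TC_1" (trial IDs are listed in Section 7)
    pilot_id: str            # e.g. "PSNC"
    infrastructure_id: str   # e.g. "MicroDC"

    def __str__(self) -> str:
        return f"{self.trial_id}/{self.pilot_id}/{self.infrastructure_id}"

# One measurement stored as a key-value pair, tagged with the collection time
record = {
    "dataset": str(DataSetId("PSNC_DC_TC_1", "PSNC", "MicroDC")),
    "timestamp": "2019-03-01T12:00:00Z",
    "key": "server42.power_usage_watts",   # illustrative key name
    "value": 187.5,
}
```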
## Data sharing
### Data sharing procedure
Data collected, generated and processed by the CATALYST project will be
available in the following locations:
* CATALYST website,
* Local repositories, testbeds and websites such as _http://labee.psnc.pl_ (at PSNC),
* Storage systems for monitoring data in data centres and other infrastructure (e.g. BMS in buildings),
* External repositories such as OpenAIRE.
Depending on security constraints and data reusability, selected data sets will be moved to public repositories. To this end, restrictions on access must be defined for each data set (see Section 7). To control this process, a Data Access Committee will be established, consisting of the project coordinator, the data management officer and the data owner.
The sources of data sets in CATALYST along with a procedure for data sharing
decisions are illustrated in Figure 1.
Figure 1 Data set sources and procedure for data sharing decisions
Data sets will mostly be generated by pilots within WP7; however, other sources related, for instance, to marketplaces, or any other data generated/used by other work packages, are also possible, as indicated in the diagram (and summarised in Sections 3 and 7). When a data set is defined and is going to be collected and/or used in the project, the data owner or WP leader must inform the Data Management Officer. The Data Access Committee decides whether and how this data set will be shared. If there are restrictions due to the internal policies of data owners, a decision to deny access can be taken. The Data Access Committee may also decide to pre-process data before sharing, for example performing anonymization or restricting access to only parts of the data. For shared data, license and access details will be defined (e.g. location, identifiers). Depending on the volume, the usefulness and the time after which the information becomes outdated, specific sets can be accessible from local repositories or from public open repositories such as OpenAIRE [ **3** ].
### Data sets availability
The following table summarizes the availability of particular data sets:
Table 1 - Available data sets
<table>
<tr>
<th>
Data set
</th>
<th>
Owner
</th>
<th>
Accessibility
</th>
<th>
Where
</th> </tr>
<tr>
<td>
Micro data centre
</td>
<td>
PSNC
</td>
<td>
Public (for
selected data)
</td>
<td>
Local ( _https://labee.psnc.pl/_ , for registered users) and public (Zenodo)
repositories (for selected data)
</td> </tr>
<tr>
<td>
Photovoltaic system
</td>
<td>
PSNC
</td>
<td>
Public (for
selected data)
</td>
<td>
Local ( _https://labee.psnc.pl/_ , for registered users) and public (Zenodo)
repositories (for selected data)
</td> </tr>
<tr>
<td>
Main data centre
</td>
<td>
PSNC
</td>
<td>
Only within the project
</td>
<td>
Local repositories
</td> </tr>
<tr>
<td>
PUT buildings
</td>
<td>
PUT
</td>
<td>
Only within the project (PSNC)
</td>
<td>
Local repositories
</td> </tr>
<tr>
<td>
Distributed DC Testbed
</td>
<td>
QRN
</td>
<td>
Only within the project
</td>
<td>
Local repositories
</td> </tr>
<tr>
<td>
DC CFD simulations
</td>
<td>
PSNC
</td>
<td>
Public
</td>
<td>
Local ( _https://labee.psnc.pl/_ , for registered users) and public (Zenodo)
repositories (for selected data)
</td> </tr>
<tr>
<td>
GDC-SG
</td>
<td>
GIT
</td>
<td>
Only within the project
</td>
<td>
Local repositories
</td> </tr>
<tr>
<td>
PSM
</td>
<td>
ENG
</td>
<td>
Only within the project
</td>
<td>
Local repositories
</td> </tr>
<tr>
<td>
Prediction and optimisation of DC flexibility
</td>
<td>
TUC
</td>
<td>
Public
</td>
<td>
Local repositories (for selected data) and public repositories (for selected
data)
</td> </tr>
<tr>
<td>
Waste heat reuse offices
</td>
<td>
SBP
</td>
<td>
Public
</td>
<td>
Local repositories
</td> </tr>
<tr>
<td>
Waste heat reuse utilities
</td>
<td>
SBP
</td>
<td>
Public
</td>
<td>
Local repositories
</td> </tr>
<tr>
<td>
Comparison UPS systems
</td>
<td>
SBP
</td>
<td>
Public
</td>
<td>
Local repositories
</td> </tr>
<tr>
<td>
Drinking Water Cooling
</td>
<td>
SBP
</td>
<td>
Public
</td>
<td>
Local repositories
</td> </tr>
<tr>
<td>
Electricity Flexibility Emergency Power pool: 1: Agreement
</td>
<td>
SBP / ALD
</td>
<td>
Public
</td>
<td>
Local repositories
</td> </tr>
<tr>
<td>
Electricity Flexibility Emergency Power pool: 2:
test data
</td>
<td>
SBP / ALD
</td>
<td>
Only within the project
</td>
<td>
Local repositories
</td> </tr>
<tr>
<td>
IT load migration
</td>
<td>
SBP / QRN
</td>
<td>
Only within the project
</td>
<td>
Local repositories
</td> </tr> </table>
### Sharing CATALYST data sets in public repositories
Data generated and gathered within the CATALYST project, that meets data
protection requirements, will be provided to the public access using OpenAIRE
[ **3** ].
OpenAIRE stands for Open Access Infrastructure for Research in Europe. It started as a project and has nowadays become an organization that aims at connecting entities supporting Open Access policies. It consists of a decentralized network of data sources, including data and literature repositories, publishers and research information systems. By these means, it facilitates combining data and publications together with the corresponding projects.
OpenAIRE is not a repository itself. Instead, it harvests the contents of various (compatible) publication and data repositories and exposes the corresponding metadata. Nevertheless, it supports, to a certain degree, sharing and publishing data by guiding the user through this process. It enables selecting a proper repository according to the domain of the data, or suggests a repository related to the data provider's institution. The third option is to use the Zenodo repository [ **4** ], which originated from OpenAIRE.
OpenAIRE also delivers a content provider management system. It does not support repository creation but, similarly to data sharing, it suggests repositories that could be linked to OpenAIRE. These repositories have to be registered beforehand in OpenDOAR (in the case of literature repositories) or in the re3data registry [ **5** ] (in the case of data repositories).
As mentioned, the system is designed towards proper project metadata management and labelling. It focuses on managing the metadata rather than the data itself. To facilitate this, it provides a dedicated API allowing developers to access the metadata information space. Bulk access is available via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). There is also a possibility to use an HTTP API for selective access to publications, data, software and projects. In order to work with OpenAIRE, an account (and thus registration) is needed.
Zenodo, offered as a solution for data sharing by OpenAIRE, is a general-purpose open-access repository operated by CERN [ **6** ] and developed under the OpenAIRE programme. It is already registered within the re3data registry and can thus be seamlessly and easily used as a repository accessible via OpenAIRE. Publishing data with Zenodo is straightforward: Zenodo offers an easy drag-and-drop mechanism through which publications, data sets, images, software, presentations, etc. can be uploaded. The only restriction is the size of the data, which is limited to 50 GB per data set.
Zenodo offers a possibility to define the basic characteristics of the uploaded data, such as:
* Title,
* Authors,
* Publication Date,
* Short Description,
* Keywords,
* Digital Object Identifier
Moreover, it offers a fine-grained definition of access rights:
* Open Access
* Embargoed Access
* Restricted Access
* Closed Access
together with corresponding data licenses.
Last but not least, related projects and grants can be defined. Of course, there is a lot of additional information that can be assigned to each uploaded data set; the full list can be seen in [ **7** ]. Apart from that, Zenodo can be integrated with popular services like Dropbox (for uploading files) or GitHub (for code sharing).
Zenodo comes with two APIs for managing the data: a REST API, which currently supports only upload and publishing, and OAI-PMH, which allows the user to harvest the entire repository via the Open Archives Initiative protocol (see the sketch below). In order to work with Zenodo, one should create an account in advance.
All the aforementioned features make Zenodo a good alternative repository in case a project participant lacks its own data storage infrastructure or a proper, re3data-registered repository.
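For illustration, the sketch below uploads a data set via Zenodo's REST API, following the endpoints documented at developers.zenodo.org at the time of writing; the file name, metadata values and license identifier are assumptions, and an access token from a registered account is required.

```python
# Hedged sketch of publishing a data set via the Zenodo REST API; all concrete
# names, metadata values and the license identifier below are illustrative.
import requests

ZENODO = "https://zenodo.org/api"
TOKEN = "..."  # personal access token (placeholder)

# 1. Create an empty deposition
dep = requests.post(f"{ZENODO}/deposit/depositions",
                    params={"access_token": TOKEN}, json={}).json()

# 2. Attach a data file (equivalent of the drag-and-drop upload)
with open("psnc_microdc_subset.csv", "rb") as fh:  # illustrative file name
    requests.post(f"{ZENODO}/deposit/depositions/{dep['id']}/files",
                  params={"access_token": TOKEN},
                  data={"name": "psnc_microdc_subset.csv"}, files={"file": fh})

# 3. Set the descriptive metadata listed above (title, description, keywords, ...)
metadata = {"metadata": {
    "title": "CATALYST micro data centre measurements (subset)",
    "upload_type": "dataset",
    "description": "Power and temperature measurements from trial PSNC_DC_TC_1.",
    "creators": [{"name": "PSNC"}],
    "keywords": ["CATALYST", "data centre", "energy"],
    "access_right": "open",
    "license": "ODC-BY-1.0",  # license identifier spelling may differ on Zenodo
}}
requests.put(f"{ZENODO}/deposit/depositions/{dep['id']}",
             params={"access_token": TOKEN}, json=metadata)

# 4. Publish the deposition (irreversible; a DOI is minted)
requests.post(f"{ZENODO}/deposit/depositions/{dep['id']}/actions/publish",
              params={"access_token": TOKEN})
```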
To be registered in re3data registry, a research data repository has to meet
the following requirements [ **5** ]:
* must be run by a legal entity, such as a sustainable institution (e.g. library, university)
* must clarify access conditions to the data and repository as well as the terms of use
* must be focused on research data
Figure 2 summarizes the procedure of sharing data with OpenAIRE.
Figure 2: Sharing data in public repositories using OpenAIRE
Using the Zenodo platform as a repository, it is also possible to define the set of aforementioned metadata, which can later be used to find the given publication/data set. The following information is especially relevant from the perspective of the CATALYST project:
* Data type: publication, data set, presentation, etc.
* Publication date – date of the publication (Zenodo does not offer the possibility to define the date range in which the data was gathered – this could be a part of Description section)
* Title – identifying the data scope
* Authors – authors/providers of the data
* Description – description of the data (due to the general purpose of the Zenodo repository, and thus, the lack of specified metadata, this section should provide a detailed description of the data)
* Language – language in which the data is provided
* Keywords – keywords identifying the data (similarly to the description, this section specifies information typical for the given data)
* Access rights – access rights to the data
* License – license under which the data will be available
* Grant/project – reference to the CATALYST project, grant number
* Contributors – other people who contributed to the given data (in case of collaboration with other projects)
* References – references to papers/presentations in which the data was mentioned
## Making data interoperable
If possible, CATALYST will follow existing good practices, standards and data formats to make data interoperable and ensure compliance.
The plan towards interoperability includes:
* Application of recognized CATALYST good practices, e.g.:
o Generic format used by DOLFIN to store measurement data – flexible enough for various data descriptions
* Data model based on GEYSER experiences to describe the data centre along with its ecosystem (interfaces to marketplaces, etc.) for the CATALYST optimiser
* Taxonomy for describing DC applied in CATALYST – based on existing classifications of DC, their subsystems, etc.
* Taking into consideration a compliance with relevant standards and regulations when establishing measurement procedures, data naming, etc.,
* Applying the Green Grid PUE monitoring guidelines [8] when performing energy measurements and calculating PUE (standard PUE – EN 50600-4-2); see the sketch after this list
* If calculating the Renewable Energy Factor (REF) referring to standard EN50600-4-3 [9]
* Checking if applied strategies are related to best practices defined in the EU Code of Conduct (and CLC/TR 50600-99-1 standard to which these practices have been incorporated) [10]
* Referring to ASHRAE TC 9.9 2015 for thermal guidelines and server types [10]
* Looking at other EN50600 standard series, e.g. EN50600-4-4 and EN50600-4-5 if KPIs such as IT Equipment Energy Efficiency and Energy Utilisation for Servers must be monitored [9]
* Publishing the data in public repositories with well defined metadata, standards of storing the data and access interfaces
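For reference, the two headline metrics named above reduce to simple energy ratios; the sketch below is a minimal illustration with invented figures, not CATALYST measurement code.

```python
# Minimal sketch of PUE (EN 50600-4-2) and REF (EN 50600-4-3) as energy ratios;
# inputs are energy totals over the same period (e.g. kWh per year).
def pue(total_facility_energy: float, it_equipment_energy: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_energy / it_equipment_energy

def ref(renewable_energy: float, total_facility_energy: float) -> float:
    """Renewable Energy Factor: renewable energy used / total facility energy."""
    return renewable_energy / total_facility_energy

# Illustrative figures: 1,500 MWh total, 1,000 MWh for IT, 300 MWh renewable
print(pue(1_500_000, 1_000_000))  # 1.5
print(ref(300_000, 1_500_000))    # 0.2
```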
## Increase data re-use (through clarifying licences)
The data management plan establishes how licensing will be addressed during the project lifetime according to the identified data sets. At this stage of the project, we have initially classified the project data sets and provided the positioning of the project with reference to licensing. In the frame of the project, three main categories of data are identified, as follows:
* Project data sets for dissemination: this kind of data will be open access by default in order to validate scientific publications; this can include some data coming from pilots, simulations and results of studies.
* Data sets produced by work packages: this kind of data will be re-used in accordance with the specific WP policy. For example, some data created and stored during the project can be for internal management and communication within the consortium; other data can be shared outside the project and used by third parties.
* Technical data sets: these data concern the pilots and consist mainly of pilot measurements collected within the project timeframe. Simulation data are considered as well. General assumptions regarding these data sets are as follows:
o Data sets that enclose personal data cannot be shared as-is for privacy reasons. This is the case of the QRN DC, which handles multiple radiators deployed in residential/office buildings. Even if these data cannot be tied to one person or household, they have to be treated cautiously; the QRN data set will be shared only with CATALYST project members upon request.
o Data sets that require aggregation or anonymization for security or commercial reasons prior to release. This is the case of the PSM DC: raw data can be shared only for the objective of the trial and within the scope of the project; otherwise aggregated data, the final result of the CATALYST solution exploited in the PSM pilot, can be shared beyond the objective of the trial.
o Data sets that can be shared without any restriction (but according to the defined license): this is the case of the PSNC micro data centre testbed.
o Data sets that cannot be shared outside the project: this is the case of the PSNC DC related to the High Performance Computing centre, the SBP DC related to the colocation data centre, and data from the BMS of Poznan University of Technology (PUT) buildings.
More specific licenses/restrictions are defined per each data set below.
### Pilot data
* Pilot 1: ENG PSM
Table 2 - ENG PSM data sets
<table>
<tr>
<th>
Data set
</th>
<th>
Owner
</th>
<th>
Restrictions
</th>
<th>
License
</th>
<th>
Availability after the end of the project
</th>
<th>
Justification (if needed)
</th> </tr>
<tr>
<td>
Pilot 1: ENG PSM
</td>
<td>
ENG
</td>
<td>
Proprietary data. Not public
</td>
<td>
Use in the project
</td>
<td>
No
</td>
<td>
Customer data
</td> </tr> </table>
* Pilot 2: PSNC – HPC
Table 3 - PSNC - HPC data sets
<table>
<tr>
<th>
Data set
</th>
<th>
Owner
</th>
<th>
Restrictions
</th>
<th>
License
</th>
<th>
Availability after the end of the project
</th>
<th>
Justification (if needed)
</th> </tr>
<tr>
<td>
Micro data
centre
</td>
<td>
PSNC
</td>
<td>
Shared
</td>
<td>
Open Data
Commons
(ODC-BY)
</td>
<td>
Yes (selected sets)
</td>
<td>
</td> </tr>
<tr>
<td>
Photovoltaic system
</td>
<td>
PSNC
</td>
<td>
Sharing subset of data
</td>
<td>
Open Data
Commons
(ODC-BY)
</td>
<td>
Yes (subsets)
</td>
<td>
</td> </tr>
<tr>
<td>
Main data
centre
</td>
<td>
PSNC
</td>
<td>
Critical data. Not public
</td>
<td>
Use in
project
</td>
<td>
the
</td>
<td>
No
</td>
<td>
Internal DC
data
</td> </tr>
<tr>
<td>
PUT buildings
</td>
<td>
PUT
</td>
<td>
Proprietary data. Not public
</td>
<td>
Use in
project
</td>
<td>
the
</td>
<td>
No
</td>
<td>
Data owned by PUT
</td> </tr> </table>
* Pilot 3: SBP Co-location data center
Table 4 - SBP data sets
<table>
<tr>
<th>
Data set
</th>
<th>
Owner
</th>
<th>
Restrictions
</th>
<th>
License
</th>
<th>
Availability after the end of the project
</th>
<th>
Justification (if needed)
</th> </tr>
<tr>
<td>
Waste heat
reuse offices
</td>
<td>
SBP
</td>
<td>
Shared
</td>
<td>
Open Data
</td>
<td>
Yes, calculation data
</td>
<td>
</td> </tr>
<tr>
<td>
Waste heat
reuse utilities
</td>
<td>
SBP
</td>
<td>
Shared
</td>
<td>
Open Data
</td>
<td>
Yes, calculation data
</td>
<td>
</td> </tr>
<tr>
<td>
Comparison UPS systems
</td>
<td>
SBP
</td>
<td>
Shared
</td>
<td>
Open Data
</td>
<td>
Yes, overview data
</td>
<td>
</td> </tr>
<tr>
<td>
Drinking Water Cooling
</td>
<td>
SBP
</td>
<td>
Shared
</td>
<td>
Open Data
</td>
<td>
Yes, calculation data
</td>
<td>
</td> </tr>
<tr>
<td>
Electricity
Flexibility
Emergency Power pool: 1: Agreement
</td>
<td>
SBP / ALD
</td>
<td>
Shared
(anonymized)
</td>
<td>
Open Data
</td>
<td>
Yes, text
document
</td>
<td>
Anonymization
for
confidentiality reason
</td> </tr>
<tr>
<td>
Electricity
Flexibility
Emergency Power pool: 2:
test data
</td>
<td>
SBP / ALD
</td>
<td>
Critical data, Not public
</td>
<td>
Use in the project (basic data)
</td>
<td>
No
</td>
<td>
DC data,
confidentiality
</td> </tr>
<tr>
<td>
IT load migration (pilot continuity pending)
</td>
<td>
SBP / QRN
</td>
<td>
Critical data, Not public
</td>
<td>
Use in the project (basic data)
</td>
<td>
No
</td>
<td>
DC data,
confidentiality
</td> </tr> </table>
* Pilot 4: QRN Distributed HPC
Table 5 - QRN data sets
<table>
<tr>
<th>
Data set
</th>
<th>
</th>
<th>
Owner
</th>
<th>
Restrictions
</th>
<th>
License
</th>
<th>
Availability after the end of the project
</th>
<th>
Justification needed)
</th>
<th>
(if
</th> </tr>
<tr>
<td>
Distributed Testbed
</td>
<td>
DC
</td>
<td>
QRN
</td>
<td>
Proprietary
data. Not
public
</td>
<td>
Use in the project
</td>
<td>
No
</td>
<td>
Personal data
</td>
<td>
</td> </tr> </table>
### Non-pilots data
Table 6 - Non-pilots data sets
<table>
<tr>
<th>
Data set
</th>
<th>
Owner
</th>
<th>
Restrictions
</th>
<th>
License
</th>
<th>
Availability after the end of the project
</th>
<th>
Justification (if needed)
</th> </tr>
<tr>
<td>
DC CFD
simulations
</td>
<td>
PSNC
</td>
<td>
Shared
</td>
<td>
Open Data
Commons
(ODC-BY)
</td>
<td>
Yes (selected
sets)
</td>
<td>
</td> </tr>
<tr>
<td>
Prediction and optimisation of DC flexibility
</td>
<td>
TUC
</td>
<td>
Shared
</td>
<td>
Open Data
Commons
(ODC-BY)
</td>
<td>
Yes (selected
sets)
</td>
<td>
</td> </tr> </table>
### Social data
Table 7 - Social data sets
<table>
<tr>
<th>
Data set
</th>
<th>
Owner
</th>
<th>
Restrictions
</th>
<th>
License
</th>
<th>
Availability after the end of the project
</th>
<th>
Justification (if needed)
</th> </tr>
<tr>
<td>
GDC-SG
</td>
<td>
GIT
</td>
<td>
Public information only after members have
granted their permission
</td>
<td>
Use in the project
</td>
<td>
Taken care by
GDC-SG
</td>
<td>
Personal data
</td> </tr> </table>
### Market data
Simulated Marketplace data will be used only for the CATALYST experimentation objective and within the scope of the project. Access to marketplace data generated during the operation of the marketplace instance is subject to authorization control and permission rights granted per user group, whether accessed programmatically or through the marketplace user interface. User credentials are hashed using the PBKDF2 algorithm.
# Allocation of resources
## Allocation of resources within project lifetime
Allocation of resources concerns the costs of making research data findable, accessible, interoperable and reusable (FAIR) in the project. These costs can differ according to the specific data set considered.
Regarding the provision of the DMP, the related costs involve WP7 – _Trials & Performance Validation_, since Task 7.1 – _Specification of trials and evaluation methodology_ is responsible for the preparation of the data management plan. These costs are covered by the budget associated with WP7.
PSNC, as WP7 leader, is responsible, along with other WP7 partners (especially ENG), for the overall data management process.
Resources can also be related to the long-term preservation of data. How these data will be kept for specific data sets is discussed in Section 5.2. However, it can be assumed that the storage of pilot-related data sets, and potential access beyond the project lifetime, will be covered by the partner formally responsible for the corresponding CATALYST pilot.
## Covering long-term preservation of data
In this section, the identification of resources for the storage of pilot-related data sets (and others) and potential access beyond the project lifetime is discussed. The information for each data set owner is summarised below.
#### _PSNC_
Subsets of data sets concerning PSNC's micro data centre, photovoltaic system, and simulation data will be stored after the end of the project according to the definition in Section 4.4. Financing of this data storage will be provided by PSNC through the PSNC micro data centre laboratory and the LABEE portal ( _http://labee.psnc.pl_ ) and other projects that make use of either this environment or the specific data sets available at this portal. Additionally, the most important data sets considered for public sharing will be shared via OpenAIRE/Zenodo.
#### _QRN_
Data corresponding to QRN distributed operation is stored in a production database for internal purposes. These data will therefore be stored on a long-term basis by QRN and will thus be retrievable after the project. Data created during the project for the project's needs will be stored in an internal repository according to the project requirements.
#### _GIT_
Data related to GDC-SG members will be handled by the GDC-SG after the project ends. The core members of the group will be responsible for defining the organizational structure in charge of storing and sharing the data.
#### _TUC_
The data sets concerning the thermal and energy prediction and flexibility optimization will be maintained by the TUC team and stored on the distributed systems research laboratory's secured data storage servers, as well as on specific platforms for public sharing via OpenAIRE.
#### _ENG_
Data will be preserved in the DC during the project and for a period of 1 year
after the end of the Project. The associated cost will be covered by ENG.
# Data security and Ethical aspects
## Data security
Data security concerns the provision of data protection during the project lifetime. The aim of the DMP is to provide information about data recovery, secure storage and the transfer of sensitive data as well.
The DC pilot-related data sets will be stored in the pilot DCs, where there is a high level of security and redundancy to prevent failures and data losses. Certified repositories will be taken into account for project data that has to be curated and preserved for a long time. Repositories like OpenAIRE will be considered for sharing data outside the project.
More detailed information (concerning the security level at data set owners
data storage, recover possibilities, etc.) is provided according to the
specific data set below.
### ENG-PSM
Measurements from the PSM DC are stored at the PSM side. Only users belonging to the project consortium can access the testbed measurements. These data will be stored in the CATALYST database, which will be deployed at PSM on a server accessible only through a VPN; only authenticated users will access this DB. All security mechanisms applied to the overall DC are therefore extended to the PSM testbed.
### PSNC
Measurements from the PSNC data centre are stored at the PSNC side, in the BMS, providing secure storage (without external access) and data replication. Measurements from the PSNC testbed will be available to registered users (registration is provided on demand). To increase security, entrance to the testbed is possible via a single access point over SSH (using key-based authentication). PSNC will also benefit from the data coming from Poznan University of Technology, which also utilizes a BMS that ensures secure storage and data replication.
### SBP
Measurements from the SBP data centre are stored inside the BMS, which is responsible for ensuring a proper security level.
### QRN
As personal data is involved in the QRN distributed DCs, the data is pseudonymised to avoid any risks in case of data leakage (a minimal sketch of one possible approach is given below). The data is stored in a secure database hosted in a “regular” DC.
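A minimal sketch of one possible keyed pseudonymisation scheme follows; it is an assumption-based illustration (HMAC over customer identifiers), not QRN's actual implementation, and the key handling and identifier format are invented for the example.

```python
# Illustrative keyed pseudonymisation: identifiers are replaced by HMAC values,
# so records stay linkable without exposing the original IDs. The key and the
# identifier format are placeholders, not QRN's actual scheme.
import hashlib
import hmac

SECRET_KEY = b"..."  # placeholder; would be stored apart from the database

def pseudonymise(customer_id: str) -> str:
    """Deterministic pseudonym; not reversible without the secret key."""
    return hmac.new(SECRET_KEY, customer_id.encode(), hashlib.sha256).hexdigest()

# Example record as it could be stored in the measurement database
record = {"heater": pseudonymise("customer-0042"), "temperature_c": 21.3}
```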
### TUC
The data will be stored in secured data storage hosted in a cloud DC. The overriding priority will be to assure data confidentiality by making sure that no identifiable information is made available. Security and confidentiality are achieved by building appropriate access rights, encryption and anonymization into the core data models to be provided.
### GIT
Data at GIT is stored within the organization's team repository. Default security policies are used for each of the systems.
### Data outside the CATALYST project
Data that will be shared using the Zenodo repository is stored in the CERN Data Centre [ **11** ]. According to Zenodo, both data files and metadata are kept in multiple online, independent replicas. Moreover, CERN guarantees to collect and store “100s of PBs of LHC data as it grows over the next 20 years” together with maintaining its data centre. Should Zenodo cease its operations, it guarantees that all content will be migrated to other repositories. Zenodo's Terms of Use make the uploader responsible for providing content that complies with privacy, data protection and intellectual property rights.
## Ethical aspects
Ethical aspects mainly concern personal data. If these data have to be shared, this aspect has to be taken into account. Indeed, as stated in chapter 5 of the CATALYST Description of Action (DoA), the consortium is aware of EU and national legislation and policies on protecting personal data and privacy, especially in the context of smart grids. The CATALYST consortium is committed to taking all necessary measures to ensure that all project activities comply with the European Charter of Fundamental Rights and all data-protection-relevant EU regulations, soft law, standardisation and policy initiatives.
To ensure the appropriate protection of data sets, CATALYST will trace and check the relevance of such regulations for the CATALYST data sets. Moreover, CATALYST will adopt, if needed, at least the following procedures/regulations:
* GDPR (General Data Protection Regulation) for data protection
* The data protection regulation framework in smart energy grids
* The Regulation (EU) 2016/679 [12] on the protection of natural persons with regard to the processing of personal data and on the free movement of such data
* The Directive (EU) 2016/680 [13] on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences
* Ad-hoc frameworks on data privacy in smart grids, including the Data Protection Impact Assessment (DPIA) [14]
* The Data Protection Impact Assessment Template [14] (supported by Commission Recommendation 2014/724/EU)
Moreover, the project will assure respect for personal data protection through a dedicated Data Protection Officer (DPO), who works at project level to ensure that an appropriate data management plan is developed and used to protect the privacy of data. As reported in the CATALYST DoA, section 3.2.1.1 Project Management Structure, the role of the CATALYST DPO is compliant with the GDPR (EU 2016/679, EU 2016/680). The CATALYST DPO is Dr. Ariel Oleksiak (PSNC).
Focusing on the technical data sets collected so far, in some cases the pilot data sets can contain “personal data”, as in the QRN pilot, where data come from the owners of heaters. As anticipated in Section 6.1.4, the data is pseudonymised to avoid any risks in case of data leakage. Proper information about the processing of data collected from heaters is provided by QRN to users, in accordance with GDPR regulations.
Regarding social data sets, potential questionnaires to collect data from GDC-SG members may contain personal data and will include information about the goals and rules related to processing these data.
# Data sets
## Data set structure
The template used to describe data sets is based on the guidelines from the EC, enhanced by (i) specific parameters related to CATALYST needs (such as the Trial ID) and (ii) additional guidelines related to given parts (e.g. the type of data centre in the case of CATALYST DC pilots).
The template to describe data sets is given below:
Table 8 - Data sets template
<table>
<tr>
<th>
Project name
</th>
<th>
CATALYST
</th> </tr>
<tr>
<td>
ID
</td>
<td>
768739
</td> </tr>
<tr>
<td>
Project coordinator
</td>
<td>
Diego Arnone
</td> </tr>
<tr>
<td>
Project description
</td>
<td>
Converting DCs into Energy Flexibility Ecosystems
</td> </tr>
<tr>
<td>
Funder
</td>
<td>
H2020-EE-2016-2017
</td> </tr>
<tr>
<td>
Principal researcher
</td>
<td>
Who is the researcher responsible for this data set?
</td> </tr>
<tr>
<td>
Collecting period
</td>
<td>
Timeframe within which data were collected
</td> </tr>
<tr>
<td>
Trial ID
</td>
<td>
Identifier of the CATALYST trial
</td> </tr>
<tr>
<td>
Data set reference and name
</td>
<td>
What data are collected and how are they named?
* Type of infrastructure/system
(colocation/enterprise/cloud/ HPC data centre, renewable energy source, smart
grid, etc.),
* Purpose of infrastructure (testbed, real production infrastructure or mix),
* Identifier: [TrialID/PilotID/InfrastructureID]
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Description of the data, the origin, nature and scale and for what purpose
they were generated. Type of data, e.g.
* Power/energy measurements: of servers, cooling systems, other devices
* Energy production and supply: RES production, heat re-used,
* Environmental data: temperatures in server rooms, humidity, flow, etc.
* IT monitoring: utilization of servers, VM management,
* Energy market: prices of energy
* Other Source
* Origin: laboratory/infrastructure measurements, experiments, simulations, etc.
* Specific subset of infrastructure
Volume
* Total volume of data, number of files, etc.
Data and file formats
* Format of data
* Standards, open/common formats used
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
Are there any suitable standards and what metadata will be created?
Metadata to identify the data set:
* Identification of trial, pilot and specific part of infrastructure (or other systems and sources used) for which this data set has been prepared and/or used
More advanced metadata (possibly to be synchronised with a CATALYST data model):
* help to understand and interpret data
* provide details about experiment setup (who, when, in which conditions, tools, versions, etc.)
* help identify and discover new data
Any community standards used to enable interoperability
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
Describe if and how the data will be shared (access, procedure, embargo periods, technical mechanisms, necessary software and tools, repositories). In case the data cannot be shared, describe the reason.
Which data will be shared?
* Final result?
* Intermediate data?
Data protection
* Who will have access?
* Is there a need to preprocess the data, e.g. anonymize?
* What are the constraints and reasons for limited access, if this is the case?
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
Where and how long will the data be preserved? What is the approximated end volume, what are the associated costs and how will they be covered?
Which data should/needs to be preserved?
* What has to be kept, e.g. data underlying publications?
* What can't be recreated, e.g. environmental recordings?
* What is potentially useful to others?
For how long?
What is the final size of the data?
Where can it be stored?
What is the cost and who will pay for it?
</td> </tr> </table>
## Data sets
This section contains the description of the data sets related to each pilot. It provides the corresponding updates for the data sets with respect to the information included in D7.1; for individual data sets, the description remains the same as in D7.1.
### Pilots data sets
Pilot 1: ENG PSM
<table>
<tr>
<th>
Project name
</th>
<th>
CATALYST
</th> </tr>
<tr>
<td>
ID
</td>
<td>
768739
</td> </tr>
<tr>
<td>
Project coordinator
</td>
<td>
Diego Arnone
</td> </tr>
<tr>
<td>
Project description
</td>
<td>
Converting DCs into Energy Flexibility Ecosystems
</td> </tr>
<tr>
<td>
Funder
</td>
<td>
H2020-EE-2016-2017
</td> </tr>
<tr>
<td>
Principal researcher
</td>
<td>
Marilena Lazzaro
</td> </tr>
<tr>
<td>
Collecting period
</td>
<td>
The data set will be collected in the timeframe of the application of the use cases in the PSM pilot: M16 (January 2019) – M36 (September 2020).
</td> </tr>
<tr>
<td>
Trial ID
</td>
<td>
PSM_DC_TC_1
PSM_DC_TC_2
PSM_DC_TC_3
</td> </tr>
<tr>
<td>
Data set reference and name
</td>
<td>
_[TrialID/PSM]_
▪ colocation data centre
• Mix: measurement data from the real production DC infrastructure; only data
relevant for the CATALYST trials will be part of the testbed.
Identifier: TrialID/PSM/ColocationDC
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Measurement data from both facilities and IT. Type of data:
* Power/energy measurements: Electrical consumption of cooling devices, Main incomer, Server room incomer, UPS.
* Energy production and supply: geothermal system is used for cooling the DC (bunkers and offices), heat recovery system to supply heat to office buildings.
* Environmental data: temperatures in server rooms, Chilled air/water flow rate, Relative/Absolute humidity of the server room, Ambient air temperature.
* IT monitoring: CPU utilization (%) of servers, Storage utilisation (%), Network utilisation (%)
Source
</td> </tr>
<tr>
<td>
</td>
<td>
* Specific subset of DC infrastructure measurements
Volume
* From tens of MB to GB, depending on how long the experimentation lasts and on the data necessary for the trials
Data and file formats
* The format of data monitored through the Honeywell OPC Server Data Access meets the specification of the OPC Data Access Standard version 1.0a or 2.0. These data are the power/energy measurements and the environmental data.
The format of the IT monitoring data depends on the specific tool used, such as NAGIOS for storage utilisation (%) and ORION for network utilisation (%).
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
Data is identified by a related timeframe. Within CATALYST Trial ID will be
used to identify data sets. Other metadata will include information about data
centre type and components as defined in Section
4.1.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
Raw data from the PSM DC can be shared only for the trial's objective and within the scope of the project. How this will be done is under evaluation according to the specific use cases to be demonstrated. The limited access to these data is due to the internal policy of the PSM DC.
Aggregated data, the final result of the PSM pilot, can be shared beyond the objective of the trial.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
Currently, the PSM DC allows data archiving according to the specific contract agreed with the customer. These data are mainly related to alarm events and are stored as .data files.
Regarding the data set described in this template, if historical data are needed for the purpose of the trial, the archiving mechanism already in place in the DC will be properly extended to the PSM pilot. Data will be preserved in the DC during the project and for a period of 1 year after the end of the project.
The associated cost will be covered by ENG.
</td> </tr> </table>
Pilot 2: PSNC – HPC
The PSNC pilot consists of several parts. First of all, Poznan Supercomputing and Networking Center operates a High Performance Computing (HPC) centre mostly (though not exclusively) addressed to scientific users. In addition to the operational DC, PSNC provides access to a testbed of the energy-efficient technologies laboratory. The data will be complemented by detailed monitoring of the photovoltaic system, which can be connected to the testbed.
_Micro data centre testbed_
<table>
<tr>
<th>
Project name
</th>
<th>
CATALYST
</th> </tr>
<tr>
<td>
ID
</td>
<td>
768739
</td> </tr>
<tr>
<td>
Project coordinator
</td>
<td>
Diego Arnone
</td> </tr>
<tr>
<td>
Project description
</td>
<td>
Converting DCs into Energy Flexibility Ecosystems
</td> </tr>
<tr>
<td>
Funder
</td>
<td>
H2020-EE-2016-2017
</td> </tr>
<tr>
<td>
Principal researcher
</td>
<td>
Ariel Oleksiak
</td> </tr>
<tr>
<td>
Collecting period
</td>
<td>
Selected periods from October 2017 to September 2020
</td> </tr>
<tr>
<td>
Trial ID
</td>
<td>
PSNC_DC_TC_1
</td> </tr>
<tr>
<td>
Data set reference and name
</td>
<td>
_[TrialID/PSNC/MicroDC]_
• PSNC micro data centre laboratory
Laboratory/testbed used in CATALYST trials
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Measurement data from both facilities and IT. Data come from around 20-100 server nodes with a power usage in the order of 3-10 kW. Servers can be partially powered by energy produced by the photovoltaic system.
Type of data:
* Power usage of servers and cooling;
* Utilisation of servers;
* Temperature of servers, room and coolant (for direct liquid cooling);
* if needed, other statistics related to workload execution (e.g. OpenStack, SLURM)
Source
* Micro data centre laboratory – a testbed at which experiments related to CATALYST trials can be executed
Volume
* Currently, the size of data is in the order of 30 GB per month and more than 70 GB per year (older data is stored with a lower sampling rate)
Data and file formats
* Mostly key-value pairs for various parameters. Data can be easily translated into the CATALYST format (partially done for monitoring in DOLFIN).
Purpose
Experiments with data centres, analysis of trial results
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
Data is identified by a related timeframe. Within CATALYST Trial ID will be
used to identify data sets.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
Data from the PSNC micro data centre can be shared. The data are currently stored in a file-based database at PSNC; intermediate detailed data can be stored in relational databases.
Data is partially available to registered users via _https://labee.man.poznan.pl_. Sharing of data subsets via the OpenAIRE repository is considered.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
Measurements from the micro data centre are currently stored at PSNC for a period of at least 1 year. The total volume collected during 1 year (using an RRD database that automatically decreases the size of older data) is currently equal to 77 GB. If the whole high-frequency data is stored then, depending on the trials' timeframe, the total volume can reach up to 0.5 TB.
This data set can demonstrate details of a trial in which certain results were
achieved.
These data can be stored at PSNC.
</td> </tr> </table>
_Photovoltaic system_
<table>
<tr>
<th>
Project name
</th>
<th>
CATALYST
</th> </tr>
<tr>
<td>
ID
</td>
<td>
768739
</td> </tr>
<tr>
<td>
Project coordinator
</td>
<td>
Diego Arnone
</td> </tr>
<tr>
<td>
Project description
</td>
<td>
Converting DCs into Energy Flexibility Ecosystems
</td> </tr>
<tr>
<td>
Funder
</td>
<td>
H2020-EE-2016-2017
</td> </tr>
<tr>
<td>
Principal researcher
</td>
<td>
Ariel Oleksiak
</td> </tr>
<tr>
<td>
Collecting period
</td>
<td>
Selected periods from October 2017 to September 2020
</td> </tr>
<tr>
<td>
Trial ID
</td>
<td>
PSNC_DC_TC_1
</td> </tr>
<tr>
<td>
Data set reference and name
</td>
<td>
_[TrialID/PSNC/RES-PV]_
PSNC photovoltaic system energy production
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Monitored energy generation of the PSNC photovoltaic system. The data come from 80 solar modules that can generate up to 20 kW at peak, and energy storage (lead-acid batteries of 75 kWh capacity).
Type of data:
* Values of energy produced in time
* Other parameters: P, S, Q x4, day yield, grid frequency, grid voltage x3, total yield
* Data collected every 5s
Source
* PSNC photovoltaic system (inverters, batteries)
Volume
* Currently, the size of data is in the order of 25 MB per month and around 50 MB per year (older data is stored with a lower sampling rate), but may increase up to 600 MB if more data are monitored and the whole data set is stored
Data and file formats
* Mostly key-value pairs of time and power (other parameters also possible). Data can be easily translated into the CATALYST format (partially done for monitoring in DOLFIN).
Purpose
Analysis of trials with on-site renewable energy use
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
Data is identified by a related timeframe. Within CATALYST Trial ID will be
used to identify data sets. Other metadata will include information about the
PV system configuration and location.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
Data from PSNC photovoltaic system can be shared on request.
</td> </tr>
<tr>
<td>
</td>
<td>
Currently data are stored in file-based and relational databases at PSNC.
Sharing of data subsets via the LABEE portal or OpenAIRE repository is
considered.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
Measurements from the photovoltaic system are currently stored at PSNC for a period of at least 1 year. The total volume collected during 1 year (using an RRD database that automatically decreases the size of older data) is currently equal to 50 MB. If the whole high-frequency data is stored then, depending on the trials' timeframe, the total volume can reach 600 MB.
This data set can demonstrate details of a trial in which certain results were
achieved.
This data is stored at PSNC.
</td> </tr> </table>
_PSNC data centre_
<table>
<tr>
<th>
Project name
</th>
<th>
CATALYST
</th> </tr>
<tr>
<td>
ID
</td>
<td>
768739
</td> </tr>
<tr>
<td>
Project coordinator
</td>
<td>
Diego Arnone
</td> </tr>
<tr>
<td>
Project description
</td>
<td>
Converting DCs into Energy Flexibility Ecosystems
</td> </tr>
<tr>
<td>
Funder
</td>
<td>
H2020-EE-2016-2017
</td> </tr>
<tr>
<td>
Principal researcher
</td>
<td>
Ariel Oleksiak
</td> </tr>
<tr>
<td>
Collecting period
</td>
<td>
Selected periods from October 2017 to September 2020
</td> </tr>
<tr>
<td>
Trial ID
</td>
<td>
PSNC_DC_TC_2
</td> </tr>
<tr>
<td>
Data set reference and name
</td>
<td>
_[TrialID/PSNC/DC]_
• PSNC data centre
High Performance Computing centre
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Measurement data from both facilities and IT. Data come from a 2 MW data centre consisting of diverse systems – various HPC architectures, network equipment and others. The data include the amounts of heat re-used within the building for office heating.
Type of data:
* Energy consumption of servers and cooling;
* Temperature of coolant (high/low)
* Flow
* Power
* Heat exchanged
* Power usage of heat exchanger pump
* Ambient temperature
* Possibly other parameters – on request.
Source
* PSNC data centre, heat exchanger
Volume
* Size of data to be calculated depending on data required in trials
Data and file formats
* Mostly key-value pairs for various parameters
Purpose
* Analysis of real large-scale data centre operation and its thermal flexibility (heat re-use capacity)
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
Data is identified by its timeframe and typical metrics such as energy
consumption, etc.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
Data from the PSNC data centre cannot be shared outside the consortium.
Currently the data are stored in file-based and relational databases at PSNC.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
Measurements from the PSNC data centre are stored at PSNC (coming from the
BMS).
This data set can provide insights for development of CATALYST framework and
exploitation plans.
</td> </tr> </table>
_PSNC-PUT heat exchange_
<table>
<tr>
<th>
Project name
</th>
<th>
CATALYST
</th> </tr>
<tr>
<td>
ID
</td>
<td>
768739
</td> </tr>
<tr>
<td>
Project coordinator
</td>
<td>
Diego Arnone
</td> </tr>
<tr>
<td>
Project description
</td>
<td>
Converting DCs into Energy Flexibility Ecosystems
</td> </tr>
<tr>
<td>
Funder
</td>
<td>
H2020-EE-2016-2017
</td> </tr>
<tr>
<td>
Principal researcher
</td>
<td>
Ariel Oleksiak
</td> </tr>
<tr>
<td>
Collecting period
</td>
<td>
Selected periods from October 2017 to September 2020
</td> </tr>
<tr>
<td>
Trial ID
</td>
<td>
PSNC_DC_TC_2
</td> </tr>
<tr>
<td>
Data set reference and name
</td>
<td>
_[TrialID/PSNC/DC]_
Heat exchange capabilities and requirements between PSNC and PUT
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data based on the PSNC data centre heat production (see the _PSNC data centre_
paragraph above) and on 5 heating seasons at PUT.
Type of data:
* Values of required (also for various temperature conditions) and ordered heat
* Supply/return water temperature for PUT buildings
* Energy costs
* PSNC-PUT heat transfer installation schema and costs
Source
* Poznan University of Technology (PUT) BMS system
Volume
* Depending on period
Data and file formats
* Key-value pairs (from BMS)
Purpose
Analysis of trials evaluating heat re-use capabilities using data centre heat
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
Data is identified by its timeframe. Other metadata will include information
about the data centre type and components, as defined in Section 4.1.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
Data from PUT and PSNC data centre cannot be shared outside the consortium.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
Measurements from PUT are stored at PUT. Measurements from the PSNC data
centre are stored at PSNC (in file-based and relational databases).
</td> </tr> </table>
Pilot 3: SBP Co-location data centre
<table>
<tr>
<th>
Project name
</th>
<th>
CATALYST
</th> </tr>
<tr>
<th>
ID
</th>
<th>
768739
</th> </tr>
<tr>
<th>
Project coordinator
</th>
<th>
Diego Arnone
</th> </tr>
<tr>
<th>
Project description
</th>
<th>
Converting DCs into Energy Flexibility Ecosystems
</th> </tr>
<tr>
<th>
Funder
</th>
<th>
H2020-EE-2016-2017
</th> </tr>
<tr>
<th>
Principal researchers
</th>
<th>
Erwin Out / Paul Corneth
</th> </tr>
<tr>
<th>
Collecting period
</th>
<th>
December 2018 - August 2019
</th> </tr>
<tr>
<th>
Trial ID’s
</th>
<th>
SBP_DC_TC_1: DC Waste heat reuse for office heating;
SBP_DC_TC_2: Utility waste heat reuse for pre-heating DRUPS generator;
SBP_DC_TC_3: Comparison between static and dynamic
UPS systems;
SBP_DC_TC_4: River water DC free cooling;
SBP_DC_TC_5: Grid power flexibility – DC switch-off from grid;
SBP_DC_TC_6: IT load migration QRN – SBP DC
</th> </tr>
<tr>
<th>
Data set reference and name
</th>
<th>
What data are collected and how are they named?
* Type of infrastructure/system (colocation);
* Purpose of infrastructure (no physical test);
* Identifier: n.a.
</th> </tr>
<tr>
<th>
Data set description
</th>
<th>
Description of the data, their origin, nature and scale, and the purpose for
which they were generated.
Type of data, e.g.
* Power consumption calculation;
* Natural gas consumption calculation;
* Environmental data: temperatures in server rooms, offices: design parameters;
* IT monitoring: n.a.
* Energy market: prices of energy (power, natural gas)
Source
* Origin: calculations (SBP_DC_TC_1, SBP_DC_TC_2, SBP_DC_TC_4), study (SBP_DC_TC_3), Building Management System (SBP_DC_TC_5); measurements (SBP_DC_TC_5, SBP_DC_TC_6 [optionally])
* Schematics (SBP_DC_TC_1, SBP_DC_TC_4, SBP_DC_TC_5);
Volume
* No measurement data (SBP_DC_TC_1, SBP_DC_TC_2, SBP_DC_TC_3, SBP_DC_TC_4);
* Limited calculation data (SBP_DC_TC_1, SBP_DC_TC_2, SBP_DC_TC_4);
* Basic measurement data, limited volume (SBP_DC_TC_5, SBP_DC_TC_6 [optionally])
Data and file formats
* Word, Excel;
</th> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
Data standards, metadata sets:
Metadata to identify data set
* N.a.
More advanced metadata (possibly to be synchronised with a CATALYST data model)
* N.a.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
Shared data:
* Pilot part reports including attachments;
* Calculation results, limited volumes (attachment to reports) (SBP_DC_TC_1, SBP_DC_TC_2, SBP_DC_TC_4);
Data protection:
* Measured data (data only for internal CATALYST project use, no live streaming);
* Data from the SBP data centre cannot be shared outside the consortium.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
Archiving and data preservation:
* CATALYST website;
Which data should/need to be preserved?
* Pilot part report plus attachments;
* Recreation options:
  * no restrictions, based on Amsterdam annual average environmental data (SBP_DC_TC_1); operation parameters (SBP_DC_TC_2);
  * restrictions on river water temperatures, performances;
* Useful for others: systems principles, schematics (SBP_DC_TC_1, SBP_DC_TC_2, SBP_DC_TC_4), grid power flexibility agreement (SBP_DC_TC_5), grid power switch-off test and communication protocol (SBP_DC_TC_5), IT load migration system (SBP_DC_TC_6, optionally)
Preservation period:
* CATALYST website availability;
What is the final size of the data?
* T.b.d.; standard report size, limited amount of calculation / measurement data; estimation: < 0.5 GB;
Storage location:
* CATALYST website;
Storage costs / cost allocation:
* Overall CATALYST project;
</td> </tr> </table>
Pilot 4: QRN Distributed HPC
<table>
<tr>
<th>
Project name
</th>
<th>
CATALYST
</th> </tr>
<tr>
<td>
ID
</td>
<td>
768739
</td> </tr>
<tr>
<td>
Project coordinator
</td>
<td>
Diego Arnone
</td> </tr>
<tr>
<td>
Project description
</td>
<td>
Converting DCs into Energy Flexibility Ecosystems
</td> </tr>
<tr>
<td>
Funder
</td>
<td>
H2020-EE-2016-2017
</td> </tr>
<tr>
<td>
Principal researcher
</td>
<td>
Nicolas SAINTHERANT
</td> </tr>
<tr>
<td>
Collecting period
</td>
<td>
TBD
</td> </tr>
<tr>
<td>
Trial ID
</td>
<td>
QRN_DC_TC_1
QRN_DC_TC_2
</td> </tr>
<tr>
<td>
Data set reference and name
</td>
<td>
_[TrialID/Qarnot/DistributedDC]_
* QRN distributed cloud HPC data centre
* Real production housing building
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Measurements are used for platform management and scheduling needs; depending
on the size of the building, data can be obtained from 20 to 300 servers. Such
an installation is simply connected to the local regular energy grid.
Type of data:
* Power usage of the servers
* Temperatures (targets, dissipators, room)
* IT monitoring: CPU utilization (%) of servers,
* Global site metrics: Storage utilisation (%), Network utilisation (%), total power consumption of the building (if we are connected to smart power counters)
* Energy market: prices of energy (almost
static)
* Internet bandwidth, latency, and global
quality
* Cap on the total energy we are allowed to use, or the building is allowed to use (almost static)
Source
* One housing building (TBD)
Volume
* TBD; depends on the time frequency…
Data and file formats
* TBD
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
Data is identified by its timeframe. Within CATALYST, the Trial ID will be
used to identify data sets.
Other metadata will be rather limited, as the location and other details are
sensitive data.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
Data from the QRN distributed DC cannot be shared as-is, primarily for user
privacy reasons. It encloses personal data and as such should be treated
cautiously, even if these data cannot be tied to a single person or household.
The data sets will be shared only with CATALYST project members upon request.
Regarding data protection, these data sets will be accessible to authorized
persons at Qarnot, mainly researchers working on the CATALYST project.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
Similar data sets are stored in a data centre; the CATALYST trial data set
will most probably be stored in a regular data centre as well. Again, the
final volume will depend on many parameters, especially the measurement
frequency and the time frame.
The storage costs will be covered by Qarnot as regular data set storage. The
data sets will be kept for the whole project duration and for at least 3 years
afterwards. Depending on further use of this data, the data protection and
storage policy may evolve after the project.
</td> </tr> </table>
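Since the QRN data encloses personal data even though it cannot be tied to one person or household, sharing would plausibly involve a data-minimisation step first. The sketch below is a hypothetical illustration of such a step (aggregating per-server power into building-level totals and dropping server identifiers); it is not Qarnot's actual procedure, and all names are made up.

```python
# Hypothetical privacy-minimisation sketch, not Qarnot's actual procedure:
# aggregates per-server power readings into building-level totals per time
# window, dropping server identifiers before the data leave the site.
from collections import defaultdict

# (timestamp, server_id, power_w) raw records -- server_id is the sensitive key
raw = [
    (1507284000, "srv-01", 85.0),
    (1507284000, "srv-02", 92.5),
    (1507284300, "srv-01", 88.0),
    (1507284300, "srv-02", 90.0),
]

def aggregate(records, window_s=300):
    """Sum power over all servers in each time window; identifiers are dropped."""
    totals = defaultdict(float)
    for ts, _server_id, power in records:
        totals[ts - ts % window_s] += power
    return dict(totals)

print(aggregate(raw))  # {1507284000: 177.5, 1507284300: 178.0}
```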
### Non-pilots data sets
Simulation data coming from tools and studies performed within CATALYST
project are summarized below.
Prediction and optimisation of DC flexibility
<table>
<tr>
<th>
Project name
</th>
<th>
CATALYST
</th> </tr>
<tr>
<th>
ID
</th>
<th>
768739
</th> </tr>
<tr>
<th>
Project coordinator
</th>
<th>
Diego Arnone
</th> </tr>
<tr>
<th>
Project description
</th>
<th>
Converting DCs into Energy Flexibility Ecosystems
</th> </tr>
<tr>
<th>
Funder
</th>
<th>
H2020-EE-2016-2017
</th> </tr>
<tr>
<th>
Principal researcher
</th>
<th>
Tudor Cioara
</th> </tr>
<tr>
<th>
Collecting period
</th>
<th>
The data set will be generated during the development and evaluation of
CATALYST DC Energy Flexibility Prediction & Optimization Components (in
relation with WP4)
</th> </tr>
<tr>
<th>
Trial ID
</th>
<th>
N/A
</th> </tr>
<tr>
<th>
Data set reference and name
</th>
<th>
_[Flexibility&OptimizationDataSet]_
The data set will include simulated data related to the electrical and thermal
flexibility estimation in various situations as well as the optimization
actions for flexibility shifting to offer different services.
</th> </tr>
<tr>
<th>
Data set description
</th>
<th>
Type of data:
Over the course of the project, flexibility data will be generated in various
input contexts using WP4 defined models taking into account the hardware
configuration of the CATALYST pilot DCs. Specifically:
* Minimum and maximum electrical energy flexibility availability for each individual DC component and aggregated;
* Maximum thermal flexibility availability for the server room;
* The optimization actions taken by the energy optimizer to provide specific flexibility services.
Source
* Simulated.
Volume
* Hundreds of MB, depending on how long tests and experimentation will last.
Data and file formats
* The data will be stored in a database, from which they can be easily exported as a raw data file or as a set of INSERT statements that recreate the records in the tables. Data will also be exportable as CSV files that can be used to recreate the records in the tables in other Optimizer instances, regardless of the underlying database infrastructure.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
The relational database stores and provides access not only to the data but
also to metadata in a structure called a data dictionary or system catalogue.
It holds information about tables, columns, data types, constraints, table
relationships, and more.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
The simulated data can be shared only for the CATALYST experimentation
objective and in the scope of the project. Access to data generated during the
experimentation period is subject to authorization control and permission
rights granted per user group.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
Data will be archived and preserved in the trial DCs during the Project and
for a period of 1 year after the end of the Project.
</td> </tr> </table>
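The export paths described above (raw data file, INSERT statements, CSV) can be illustrated with a minimal sketch. The example below uses Python's standard-library sqlite3 module as a stand-in for the project's actual database, and the `flexibility` table and its columns are illustrative assumptions, not the real schema.

```python
# Minimal sketch of the described export paths (CSV and INSERT statements),
# using the stdlib sqlite3 module as a stand-in for the project's actual
# database; the table and column names are illustrative assumptions.
import csv
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE flexibility (ts INTEGER, component TEXT, min_kw REAL, max_kw REAL)"
)
conn.execute("INSERT INTO flexibility VALUES (1507284000, 'cooling', 2.0, 7.5)")

# CSV export: re-usable in any Optimizer instance regardless of the DB engine.
with open("flexibility.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([d[0] for d in conn.execute("SELECT * FROM flexibility").description])
    writer.writerows(conn.execute("SELECT * FROM flexibility"))

# INSERT-statement export: recreates the records in the tables.
for ts, comp, lo, hi in conn.execute("SELECT * FROM flexibility"):
    print(f"INSERT INTO flexibility VALUES ({ts}, '{comp}', {lo}, {hi});")
```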
CFD simulations of a server room
<table>
<tr>
<th>
Project name
</th>
<th>
CATALYST
</th> </tr>
<tr>
<td>
ID
</td>
<td>
768739
</td> </tr>
<tr>
<td>
Project coordinator
</td>
<td>
Diego Arnone
</td> </tr>
<tr>
<td>
Project description
</td>
<td>
Converting DCs into Energy Flexibility Ecosystems
</td> </tr>
<tr>
<td>
Funder
</td>
<td>
H2020-EE-2016-2017
</td> </tr>
<tr>
<td>
Principal researcher
</td>
<td>
Ariel Oleksiak
</td> </tr>
<tr>
<td>
Collecting period
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Trial ID
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data set reference and name
</td>
<td>
_[SimulationID/CFD]_
• CFD simulations of a server room
</td> </tr>
<tr>
<th>
Data set description
</th>
<th>
The data set consists of transient CFD simulation data from scenarios
calculated for the needs of an ML algorithm. It covers 3125 different
scenarios of airflow and heat transfer in a simplified server room. Scenarios
are varied over 5 key parameters: initial internal server room temperature,
air conditioner volumetric flow rate, air conditioner outlet temperature, and
the power consumption of each of the two racks present in the virtual room.
Snapshots of simulation results are collected at 30-second simulated time
intervals (the total simulated time interval is 10 minutes). Aside from the
complete volumetric results, records from specific domain points (virtual
probes) are saved at 1-second simulated time intervals. Several physical
variables are stored in the data set, such as temperature, velocity, pressure
and parameters related to turbulence modelling.
Type of data:
* temperature;
* velocity;
* pressure;
* turbulence parameters;
Source
* CFD simulations made with PSNC's framework dedicated to server room analyses, based on OpenFOAM software.
Volume
* Size of all the volumetric data is around 500GB
* The data collected by virtual probes is in the order of 1 GB.
Data and file formats
* The data is stored in native OpenFOAM format (volumetric data: binary files; probe data: plain ASCII files).
Purpose
* Analysis of thermal processes and cooling in a server room;
* Composition of a training data set for the ML algorithm
</th> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
Data is described by a definition of models using standard formats for
geometry models and simulation data. The specific instance of the data set is
identified by the set of boundary-condition parameters and timestamp values.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
Sharing the simulation data through the LABEE portal and a public repository
is under consideration.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
Currently the whole dataset is stored on the PSNC Eagle cluster. At present,
the storage time is not limited. The data is protected by periodic backups.
The configuration files of all scenarios are also stored, preserving the
possibility to continue calculations over a longer time period. Sharing via
local and public repositories is planned.
</td> </tr> </table>
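For readers preparing the probe records for ML training, the following sketch parses an OpenFOAM-style plain-ASCII probe file. It assumes the common layout of comment/header lines starting with '#' followed by one time value and one reading per probe; the exact layout may differ between OpenFOAM versions and function objects, so treat this as a starting point rather than a definitive reader.

```python
# Sketch of reading an OpenFOAM-style probe file into (time, values) pairs for
# ML training data preparation. Assumes the common plain-ASCII layout -- header
# lines starting with '#', then one time value followed by one reading per
# probe -- which may differ between OpenFOAM versions and function objects.
SAMPLE = """\
# Probe 0 (1.0 0.5 2.0)
# Probe 1 (3.0 0.5 2.0)
#   Time     p0        p1
    1.0      295.4     297.1
    2.0      295.9     297.8
"""

def parse_probes(text):
    """Return a list of (time, [probe values]) tuples from probe file text."""
    records = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank and header/comment lines
        fields = [float(x) for x in line.split()]
        records.append((fields[0], fields[1:]))
    return records

for t, values in parse_probes(SAMPLE):
    print(t, values)  # e.g. 1.0 [295.4, 297.1]
```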
### Social data sets
_GDC-SG data set_
Data set related to the GDC-SG.
<table>
<tr>
<th>
Project name
</th>
<th>
CATALYST
</th> </tr>
<tr>
<td>
ID
</td>
<td>
768739
</td> </tr>
<tr>
<td>
Project coordinator
</td>
<td>
Diego Arnone
</td> </tr>
<tr>
<td>
Project description
</td>
<td>
Converting DCs into Energy Flexibility Ecosystems
</td> </tr>
<tr>
<td>
Funder
</td>
<td>
H2020-EE-2016-2017
</td> </tr>
<tr>
<td>
Principal researcher
</td>
<td>
John Booth (GIT)
</td> </tr>
<tr>
<td>
Collecting period
</td>
<td>
During the project’s lifetime
</td> </tr>
<tr>
<td>
Trial ID
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data set reference and name
</td>
<td>
* Information on members of the Green Data Centre – Stakeholder Group (GDC-SG) and the project's Advisory Board. The latter is essentially a subset of the former.
* This data set is related to Task 8.5
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Type of data, e.g.
* Personal info such as photo, name, job description and affiliation, contact info (such as email), and possibly social media profile info on LinkedIn and Twitter.
* Their activity and participation in GDC-SG meetings, events and related tasks. This includes, for example, meeting minutes, event proceedings, photos, presentations and conference call records.
Source
* The members interested in the GDC-SG
Volume
* Depends on the number of members, related events and meetings.
Data and file formats
* As applicable; for example, an Excel file for the members directory and meeting minutes in doc or pdf format.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
TBD at a later stage once we obtain more information and experience on the
GDC-SG membership and operations
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
Which data will be shared?
* The directory of GDC-SG members will always be public
* Private meeting minutes will be shared only within the GDC-SG
* Public event proceedings, presentations, videos and such will be public
Data protection
* The whole consortium will always have access to these data.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
After the completion of the project, the GDC-SG will be responsible for
storing and sharing the data.
</td> </tr> </table>
### Market data sets
Simulated Marketplace related data
<table>
<tr>
<th>
Project name
</th>
<th>
CATALYST
</th> </tr>
<tr>
<td>
ID
</td>
<td>
768739
</td> </tr>
<tr>
<td>
Project coordinator
</td>
<td>
Diego Arnone
</td> </tr>
<tr>
<td>
Project description
</td>
<td>
Converting DCs into Energy Flexibility Ecosystems
</td> </tr>
<tr>
<td>
Funder
</td>
<td>
H2020-EE-2016-2017
</td> </tr>
<tr>
<td>
Principal researcher
</td>
<td>
Diego Arnone
</td> </tr>
<tr>
<td>
Collecting period
</td>
<td>
The data set will be generated during the CATALYST Marketplace implementation
phases and during the integration phases from M7 (Apr 2018) to M36 (Sept
2020).
</td> </tr>
<tr>
<td>
Trial ID
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data set reference and name
</td>
<td>
_[CATALYSTMarketplace]_
* The data set will include _simulated_ data related to the four variants of the CATALYST Marketplace:
Electricity, Flexibility, Heat/Cold, IT Load.
Identifier:
* CATALYSTMarketplace/Electricity
* CATALYSTMarketplace/Flexibility
* CATALYSTMarketplace/Heat/Cold
* CATALYSTMarketplace/IT Load
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Simulated Marketplace related data
Type of data:
* Over the course of the project, CATALYST Marketplace data (market participants, market sessions, bids, offers, market session results) will be generated in a simulated way and entered into a relational database. Specifically, the following data will be generated:
  o General information about energy prices, marketplaces, market sessions, market actions, transactions, invoices
  o Personal data about market participants, market operator, aggregators, DSO(s)
  o User credentials, hashed using the PBKDF2 algorithm (only the hashed value is preserved)
Source
* Simulated.
Volume
* Hundreds of MB, depending on how long tests and experimentation will last.
Data and file formats
* The data will be stored in a Postgres database, from which they can be easily exported as a raw data file or as a set of INSERT statements that recreate the records in the tables.
* Data will also be exportable as Django dumps exported as plain-text JSON files that can be used to recreate the records in the tables in other Marketplace instances, regardless of the underlying database infrastructure.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
The relational database stores and provides access not only to the data but
also to metadata in a structure called a data dictionary or system catalogue.
It holds information about tables, columns, data types, constraints, table
relationships, and more.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
Simulated Marketplace data can be shared only for the CATALYST experimentation
objective and in the scope of the project.
Access to marketplace data generated during the marketplace instance operation
is subject to authorization control and permission rights granted per user
group, when attempted both programmatically and through the marketplace user
interface. User credentials are hashed using the PBKDF2 algorithm.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
Data will be archived and preserved in the trial DCs during the Project and
for a period of 1 year after the end of the Project.
</td> </tr> </table>
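PBKDF2, mentioned above as the credential-hashing algorithm (and the default password hasher in Django, whose dump format the Marketplace uses for exports), can be illustrated with a short standard-library sketch. Only the salt and the derived hash need to be stored, never the password itself; the iteration count and salt length below are illustrative, not the project's actual settings.

```python
# Minimal illustration of PBKDF2 password hashing as described above: only the
# salt and the derived hash are stored, never the password itself. The
# iteration count and salt length here are illustrative, not the project's.
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest  # store these three values only

def verify(password: str, salt: bytes, iterations: int, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, iters, stored = hash_password("marketplace-user-secret")
assert verify("marketplace-user-secret", salt, iters, stored)
assert not verify("wrong-password", salt, iters, stored)
```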
# Conclusions
This document provides the CATALYST Data Management Plan. It delivers a
template for data descriptions with definitions of the possible data sets,
along with the data management methodology and data protection measures.
The report provides an overall description of the sources delivering data
within the CATALYST project, including data security and preservation. It
summarizes the availability of the data during the project lifetime and after
the end of the project, including the possibility of sharing the data in
external repositories accessible via OpenAIRE. Finally, a detailed description
of the particular data sets (including newly defined ones) is given.
This is the final version of the DMP; however, during the realisation of
trials, some updates of data sets may appear, depending on configurations,
results of experiments and external data availability. Further information on
how specific data sets are processed in the project, along with results from
trials, will be provided in D7.4.
A significant part of the project data is for internal use, as it comes from
critical infrastructure or contains sensitive data; however, some data sets
are planned to be shared publicly and preserved after the end of the project.
1325_NEXT-NET_768884.md
**Executive Summary**
This deliverable describes the NEXT-NET Data Management Plan and outlines how
data will be handled during the project's life.
The data collection, sharing and storing process is described in this
document, following a methodology aligned with the H2020 guidelines on data
management. In addition, data security and personal data protection questions
are addressed in this deliverable.
# 1 Introduction
NEXT-NET Project Data Management Plan (DMP) describes the data management life
cycle for the data to be collected, processed and/or generated during and
after the end of the project execution.
The DMP includes information about the types of research data to be collected,
handled or generated during and after the end of the Project. This deliverable
includes the methodology and standards to be used, the regulation of open
data, and how the data will be preserved even after the end of the Project.
NEXT-NET project participates in the Pilot on Open Research Data launched by
the European Commission along with the Horizon 2020 programme. Therefore, all
data produced by the project can potentially be published with open access –
though this objective will obviously need to be balanced with the other
principles described below.
# 2 Data Summary
The ultimate goal of NEXT-NET project is to develop a European Strategic
Research Agenda and action plan for the Supply Chains in 2030. These results
will be disseminated during and after the project execution period to all
potential stakeholders: process, manufacturing and logistics industry,
academia, policy makers, users and others.
Data collection / generation in this project are enablers for executing it
according to the Description of Action (DOA). The content of this DMP has been
adapted to the required development of the project across the different phases
and needs.
Figure 1 – Data Management Process
## 2.1 Data Collection, Sharing and Storing
Data collection is the process of gathering information that will be used
during project execution.
Throughout the project lifetime, personal data will be also collected,
obtained in the framework of different workshops and events organized to
gather primary data. Indeed, NEXT-NET follows an interaction approach; using
methods based on participation and shared views of experts and non-experts,
like workshops, multi-criteria methods, and stakeholder analysis.
Secondary data will also be collected during the project execution period (for
instance analyzing existing roadmaps as input to analyze current trends,
research gaps, etc.). These data jointly with the primary data will help shape
the future scenarios for the supply chains in 2030.
Additional data can come from a creativity approach, using methods that rely
heavily on the inventiveness and ingenuity of very skilled individuals, such
as the use of wildcards, science fiction and scenario writing. Methods based
on codified information, data, indicators, etc., like literature review,
scanning and benchmarking, will also be utilized. Likewise, access to existing
expertise will be realized using methods based on the tacit knowledge of
people with privileged access to relevant information or with accumulated
knowledge, like expert panels, quantitative scenarios, Delphi and roadmapping.
Lastly, data can originate from the project partners' background.
Personal data will also be gathered through the NEXT-NET social networks and
website, as these provide data from visitors, contact forms and users of
Twitter, LinkedIn and ResearchGate.
The data sharing process will be arranged mainly using the project intranet
(based on Innovation Place), but also through mail exchanges among the people
involved. Public documents will be shared in NEXT-NET website. Also the open
access procedure will apply for scientific publications that may derive from
the project.
Data will be stored at Innovation Place, the website, and the social networks.
Additionally, personal data will be stored on project partners' servers and
CRM.
# 3 FAIR Data
### 3.1.1 Making data findable, including provisions for metadata
Deliverables will be named with easily findable and self-explanatory tittles
to facilitate their location & identification. The information will be
uploaded to NEXT-NET intranet. Public Public information and deliverables will
be accessible at the publications section in the website. The project intranet
(based on Innovation Place) will contain information restricted to registered
users and consortium members.
### 3.1.2 Making data openly accessible and interoperable
During the project development, all the data included in public deliverables
will be public, and all this data will be posted on the NEXT-NET website at
_https://nextnetproject.eu/publications/_ . The NEXT-NET project is executed
by a consortium of partners located in different countries, so certain
sensitive data may be stored internally on the partners' servers following
their own internal criteria, always complying with the regulations in place.
NEXT-NET website contains a repository for public documents and publications
as a result of the project execution. It also contains a private access
repository (Innovation Place), where non-public data is stored.
Figure 2 – Private Area in NEXT-NET website
Entering the private area menu leads to
_https://intranet.nextnetproject.eu/login_ , where the intranet can be
accessed with a personal username and password:
Figure 3 – NEXT-NET intranet login area
Project information will be accessible from all devices (computer, laptop,
smartphone, tablet, etc.). Documents will be provided preferably in pdf format
or any other format that allows access for the majority of users.
### 3.1.3 Data re-use
Public technical deliverables are defined in the Description of Action (DOA)
and the data included there can be reused according to current and future
needs.
Potential stakeholders who may reuse data from NEXT-NET deliverables are
experts involved in the project activities, members of the NEXT-NET Advisory
Board (representing the process, manufacturing and logistics industry), policy
makers, industrial companies and other research institutions, as input for
their future research.
# 4 Allocation of resources
According to the open access to publications obligations in Horizon 2020,
NEXT-NET project partners must ensure open access to all peer-reviewed
scientific publications relating to the project results. NEXT-NET partners can
choose one of the following routes:
* Self-archiving, e.g. archiving a published article or the final peer-reviewed manuscript in an online repository before, alongside or after its publication.
* Open access publishing, e.g. placing the publication in open access mode (on the publisher or journal website). In this case, article processing charges will be allocated to the other direct costs category in the partners’ budget.
Additionally, NEXT-NET intranet has been located in PNO owned platform (called
Innovation Place) free of charge for all project partners. Therefore PNO will
be responsible for data management in the project intranet. Likewise, ZLC
manages NEXT-NET social networks and public website, including the contact
form area. People registering to the contact form will be part of the NEXT-NET
mailing list. This list will be used only for the purpose of the project.
Therefore ZLC will be responsible for data management in those sites. Hosting
and maintenance of all these resources will be allocated to NEXT-NET budget.
# 5 Data security
Project data will be stored on each project partner's server and, generally,
on the project intranet. As the final results of the project will be in the
form of public deliverables, no sensitive data will be generated, shared or
stored.
In any case, a holistic security approach will be undertaken to protect the
three main pillars of information security: confidentiality, integrity, and
availability. The security approach will consist of a methodical assessment of
security risks followed by an impact analysis. This analysis will be performed
on the personal information and data processed.
# 6 Ethical Aspects
## 6.1 PERSONAL DATA PROTECTION
Data protection will be carried out under the General Data Protection
Regulation (GDPR) (Regulation (EU) 2016/679), the European regulation through
which the European Parliament, the Council of the European Union and the
European Commission intend to strengthen and unify data protection for all the
countries of the European Union (EU), also controlling the transfer of data
outside the Union. Its main objectives are to return control over personal
information to citizens and to unify the regulatory framework for
multinationals. When it enters into force, the GDPR will replace the Data
Protection Directive 95/46/EC of 1995.
Adopted in April 2016, it is expected to enter into force on May 25, 2018,
after a transition period of two years, and, unlike directives, it does not
require incorporation into national legislation, being directly applicable.
In order to comply with this regulation, the following disclaimer appears in
NEXT-NET site when registering through the contact form to the mailing list
and newsletter:
Figure 6 – NEXT-NET contact form disclaimer
# 7 Conclusion
This deliverable describes NEXT-NET data management plan. It shows how data
will be collected, shared and stored during the life of the project. It also
describes the process to ensure FAIR data (findable, accessible, interoperable
and reusable).
In summary, the NEXT-NET project approach has been to make public data
available on the project website and to store all additional data in a common
repository located on the project intranet. No sensitive data are expected to
be collected, and therefore no issues are anticipated on this topic.
1327_OSCCAR_768947.md
**1 EXECUTIVE SUMMARY**
This deliverable is the first version of the Data Management Plan (DMP) for
OSCCAR project. OSCCAR project participates in the H2020 Open Research Data
Pilot (ORDP) [1], which means that an open access to research data is
provided. The data management within the OSCCAR project follows the data
protection and privacy rules for digital communication, which are governed by
two EU directives. Directive 95/46/EC [3] protects individuals with regard to
the processing of personal data and the free movement of such data. Directive
2002/58/EC [4] specifies the requirements on data protection and privacy in
digital communications.
The deliverable follows the guidelines for F.A.I.R Data Management set by the
EC H2020 [2], to submit a first version of OSCCAR Data Management Plan (DMP)
within the first 6 months of the project. Management of datasets containing
personal information will be compliant with the General Data Protection
Regulation (GDPR) [13].
The deliverable presents how research data is collected or generated to reach
the project goals, how data is stored, made openly accessible and re-usable.
It clarifies the responsibilities for data management and presents details on
data security.
It is encouraged to make existing data available for research within the
OSCCAR project. This data will be generated and collected especially in WP1
Determination of future accident scenarios, WP2 Development of advanced
occupant protection principles and WP3 Human Body Models for assessment of new
safety systems in future vehicles, which are described in detail.
The DMP is a living document and will be updated continuously, whenever
significant changes arise. However, an updated version will be available in
month 16 as deliverable “D7.4 Data Management update”.
_Keywords:_ Data Management Plan, FAIR data, Data security, Data re-use
2. **DESCRIPTION OF WORK**
**2.1 Data Summary**
Data being further processed in the OSCCAR project will not contain personal
or commercially sensitive information. The final results, e.g. future accident
research data from WP1, the user study and sled tests in WP2, data on tissue
and volunteer tests in WP3, and the stakeholder group in WP6, may contain
references to practices in the OSCCAR system database, the demonstrator
evaluation and validation. These data will be processed at a high
level of abstraction, devoid of any personal or commercially-sensitive data
before sharing within the consortium and publishing.
The management of knowledge, research data, data access and intellectual
property rights are delicate issues, especially in close-to-market R&D
projects with a large number of OEMs and suppliers working closely together.
The handling of this sensitive data is regulated in the OSCCAR Consortium
Agreement [5] and Grant Agreement [9]. Some issues regarding data protection,
access and security of data have also already been part of the OSCCAR Grant
Agreement, Description of Action (DoA) [9].
For reasons of competition (competition law), detailed data on cost and sales
prices of specific components and systems WILL NOT be shared.
In Table 1 an overview is presented of the data that will be generated in the
project, the standards that will be used, how this data will be exploited
and/or shared/made accessible for verification and re-use and how this data
will be curated and preserved.
<table>
<tr>
<th>
Generated data
</th>
<th>
Way of exploitation of data and/or sharing/making accessible of data for
verification and re-use
</th>
<th>
Way of curation and/or preservation of data
</th> </tr>
<tr>
<td>
openPASS simulation software modules, configuration artefacts and simulation
results
</td>
<td>
Develop procedures and documentation of accident scenario simulation
methodology based on openPASS; the aim is to provide the relevant software
modules in the openPASS environment, i. e. git repositories of Eclipse, so
data might be re-generated by re-using the modules.
</td>
<td>
Deliverables describing scenarios and simulation based assessment publicly
available on the OSCCAR website; open source code and OSCCAR specific
configuration artefacts (scenario models) generated in OSCCAR committed e.g.
to
Eclipse git repositories
</td> </tr>
<tr>
<td>
OpenSCENARIO format
descriptions
</td>
<td>
Description of approach how to use OpenSCENARIO in the OSCCAR toolchain (link
between openPASS and CAE tools); provide examples of scenarios
</td>
<td>
Publicly available (on the OSCCAR website) ;
Communication with XOSC project to incorporate changes/additions
</td> </tr>
<tr>
<td>
Generated data
</td>
<td>
Way of exploitation of data and/or sharing/making accessible of data for
verification and re-use
</td>
<td>
Way of curation and/or preservation of data
</td> </tr>
<tr>
<td>
Procedures for hardware testing for component and sled testing
</td>
<td>
Convey enhanced testing procedures with respect to new components related to
future interior concepts to the international standardization community.
</td>
<td>
Communication with standardization agencies + Publicly available on the OSCCAR
website.
</td> </tr>
<tr>
<td>
Data generated within the WP2 user study
</td>
<td>
Generation of knowledge in respect to user opinion and input for selection of
testcases to be persued within OSCCAR
</td>
<td>
Publication of main results
</td> </tr>
<tr>
<td>
Biomechanical data analysis within WP3
</td>
<td>
Development and
enhancement of HBMs
characteristics. Description and publication of implementation methods
</td>
<td>
Validation data and methods to be published.
Respective deliverables publicly available on the
OSCCAR website
</td> </tr>
<tr>
<td>
Simulation and assessment procedures for HBM application
</td>
<td>
Convey adapted procedures
for virtual testing to the international standardization community.
</td>
<td>
Communication with standardization agencies +
Publicly available on the
OSCCAR website
</td> </tr> </table>
**Table 1 Overview of data generated in the project**
In the Consortium Agreement [1] signed by all partners the following aspects
regarding data management are addressed and regulated: Background, Results,
Access Rights, Publications.
#### 2.1.1 Data types and formats
The types and formats in which research data are created depend on how
researchers choose to collect and analyse data, and on discipline-specific
standards and customs. Ensuring long-term usability of the OSCCAR data
requires consideration of the most appropriate types and file formats.
Within research activities of OSCCAR project it is intended to use different
types and formats of data. To reach the project goals an intensive review and
coordinated analysis of past and newly available volunteer tests data will be
necessary. These data sets are from different formats and types:
* Photos, videos, spreadsheets, mechanical measurement data (force, displacement, trajectories, etc.), electromyography (EMG) measurement data (muscle activity)
For the data exchange of research results:
* Word documents, Excel documents, PowerPoint documents, pdf, documents containing simulation code, datasets
For the documentation of results:
* Deliverables (Word documents)
* Publications (pdf)
Personal data:
* Contact list (internal use only)
* Personal data for the user study: a dataset on an empirical sample describing ergonomic aspects of user behaviour in a simulated automated driving situation. The handling of the data is regulated in the information sheet, the informed consent form and the form for withdrawal of participation. No sensitive data will be collected. In the informed consent form, volunteers for the study are informed about what data will be collected (e.g. audio/visual/text/survey). In the information sheet the assurance of anonymity and confidentiality is defined. Besides, it details how the data collected will be used and what happens to data and results at the end of the research [14].
The following rules are followed:
* Data will be stored under a pseudonym. Data will be stored electronically and on paper. Only researchers authorized by a signed form have access to the data. The data can be accessed by any third party only in an anonymised way.
* ID-related information will be kept in separate databases from other information types in order to ensure that no personal data can be obtained without the proper authorization.
* No sensitive data will be collected.
* The personal data will be stored for the duration of the project only, if collected at all.
* De-identified data will be deposited or submitted to an open source online research data repository at the end of the study. This data may be used for future research.
#### 2.1.2 Re-use of existing data
It is encouraged to make existing data available for research within the
OSCCAR project. This data will be generated and collected especially in WP3
Human Body Models (HBM) for assessment of new safety systems in future
vehicles. An overview of the data in the corresponding tasks is shown below:
* Task 3.1.1 Existing experimental data from (CHALM; DAI and LMU)
* Task 3.1.2 Existing data: Chalmers/JARI will contribute with MRI (Magnet Resonance Imaging) data of seated volunteers. Such data will be used to further understand the position of the pelvis and adjacent inner organs.
* Task 3.1.3 Additional data on pelvis to seat interaction will be made available and will enable the use of HBMs to predict human in-crash response in future sitting positions. Existing PMHS neck response data have been identified and plans for data exchange established.
* Task 3.2.1 Project partners will provide additional volunteer data: Task 3.2.1b. These are among other parts of the OM4IS 1 (Occupant Models for Integrated Safety project) test series, the AHBM 2 (Active Human Body Model Project) test series, AHBM 3 test series and parts of the Precooni 1 data. To make this data available and to enable validation of morphed active HBMs, i.e. models made to be representative of the population diversity, additional data analysis is required.
Intensive review and coordinated analysis of past and newly available
volunteer tests will be used.
* OM4IS 1/2 (Improved Predictability research project) test results/manoeuvres incl.
previously unpublished female volunteer data to be made available in detail.
These are braking, lane change and combined manoeuvres.
* Precooni (K2 research project) data & results to be made available in detail. This is a small, low-g sled test series with volunteers under laboratory conditions.
* SAFER (SAFER Consortium ) data: AHBM 2: Response corridors for vehicle kinematics, muscle activation, interactions forces, and volunteer kinematics will be provided. AHBM3: Response corridors for vehicle kinematics, muscle activation, interactions forces, seat pressure and volunteer kinematics will be provided. Analysis of female data and data for volunteers when in the driver’s seat are just to be started.
* Daimler Driving Study to be made available in detail (Unpublished volunteer test – Focus on driver: steering, braking, combined manoeuvre – male & female )
* Published and soon-to-be-published TME data to be made available in detail (owned together with the IFSTTAR institute – Institut français des sciences et technologies des transports, de l'aménagement et des réseaux)
* Existing volunteer data owned by TASS/ Siemens company will be provided.
* Analysis of PRISM (EU Project- Proposed reduction of car crash injuries through improved smart restraint development technologies project) data (standard seating posture/driver).
During the course of the project, the OSCCAR team may gain access to data that
was collected before the start of the project and by an organisation that is
not a member of the consortium. In this event, the OSCCAR partner who receives
this data must ensure that there is no information contained in it, which
could be used to identify individual citizens. Further, the OSCCAR partner
must be mindful of the risks of linking this data, or conclusions resulting
from this data with data or conclusions from other data sources. Informed
consent must be obtained when acquiring preexisting data from external
sources. [9]
#### 2.1.3 Origin of data
Suitable datasets to being used for validation/tuning for human body models
come from OSCCAR partners and from initiatives and projects in which the
needed data has been collected, see [7]. Data sets provided to OSCCAR will be
assessed a-priori for their suitability by the respective partner. Only
technically and legally suitable data will be used within OSCCAR.
#### 2.1.4 Expected size of data
To be evaluated during the course of the project. The expected size to be
handled depends on the extent and the nature of the data that are made
available. Datasets including high-resolution and/or high-speed video tend to
be relatively large, as are simulation datasets that include full vehicles and
HBMs in pre- and in-crash simulation. Several GBs of data can be expected for
a single set. Therefore, only stripped sets will be made available.
#### 2.1.5 Data utility
* Automotive Industry
* OSCCAR consortium
* OSCCAR associated/ international research partners
* All Industry and the large research community dealing with virtual testing and or biomechanics.
* European Commission services and European Agencies
* Austrian institutions and bodies of the EU
* The general public including the broader scientific community
**2.2 FAIR data**
#### 2.2.1 Making data findable
With respect to the guidelines on FAIR Data Management in Horizon 2020 [11],
research data should be "FAIR", that is findable, accessible, interoperable
and re-usable. The OSCCAR project will undertake all necessary actions to make
its data FAIR.
The primary responsibility for storing and making data findable lies with the
data creator. However, all data created within OSCCAR project will be stored
in one central archive – OSCCAR Projectplace [10].
#### 2.2.2 Naming conventions
Structured data storage is essential for proper and secure storage of data
files and records. For any file-based storage this includes clear and
unambiguous file naming, the use of proper versioning, and a clear and
intuitive folder structure.
The naming conventions within OSCCAR project are described in the Project
Handbook [8]. The following conventions are foreseen for all project documents
(presentations, meeting minutes, deliverables, reports…)
* DATE (only if required)
* date of creation (format: yyyymmdd)
* TITLE
* Short description of the document, please use 7-bit ASCII characters (a..z, 0..9, _, -) only (DO NOT use special characters, e.g. “:” )
* VERSION (only if required)
* vX-Y … use position X for a major release, and position Y for a minor release (e.g. v0-1, v0-2, v1-0, v1-1, v2-0 …)
* Avoid uploading files with version numbers in their name on Projectplace – use the versioning system instead.
* FILEEXTENSION
* according to the type of the file (docx, pdf, …)
##### 2.2.2.1 Presentations
{DATE_}OSCCAR_TITLE{_VERSION}.FILEEXTENSION e.g.
20150623_OSCCAR_WP3_Overview_v1-0.ppt
e.g. OSCCAR_Overview.pdf
##### 2.2.2.2 Meeting minutes
DATE_OSCCAR_MoM_TITLE{_VERSION}.FILEEXTENSION e.g.
20140323_OSCCAR_MoM_CoreTeamWebex.pdf
e.g. 20140323_OSCCAR_MoM_CoreTeamWebex_v1-0.doc
##### 2.2.2.3 Deliverables
These naming conventions are relevant for the deliverable creation and
deliverable review process. The submission of deliverables to the European
Commission is exclusively handled by the co-ordinator.
• OSCCAR_D_DELIVERABLENUMBER.FILEEXTENSION
e.g. OSCCAR_D_6.1.doc
e.g. OSCCAR_D_6.2.pdf
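As a sanity check, the naming conventions above can be expressed as regular expressions. The patterns below are our reading of the stated rules (optional yyyymmdd date, 7-bit ASCII title, optional vX-Y version), not an official validation tool.

```python
# Sketch of validating file names against the OSCCAR conventions above.
# The regular expressions are one reading of the stated rules (optional
# yyyymmdd date, 7-bit ASCII title, optional vX-Y version), not an official tool.
import re

PRESENTATION = re.compile(
    r"^(?:\d{8}_)?OSCCAR_[A-Za-z0-9_\-]+(?:_v\d+-\d+)?\.[A-Za-z0-9]+$"
)
MINUTES = re.compile(
    r"^\d{8}_OSCCAR_MoM_[A-Za-z0-9_\-]+(?:_v\d+-\d+)?\.[A-Za-z0-9]+$"
)
DELIVERABLE = re.compile(r"^OSCCAR_D_\d+\.\d+\.[A-Za-z0-9]+$")

assert PRESENTATION.match("20150623_OSCCAR_WP3_Overview_v1-0.ppt")
assert PRESENTATION.match("OSCCAR_Overview.pdf")
assert MINUTES.match("20140323_OSCCAR_MoM_CoreTeamWebex_v1-0.doc")
assert DELIVERABLE.match("OSCCAR_D_6.1.doc")
assert not DELIVERABLE.match("OSCCAR_D_6.doc")  # missing minor number
```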
#### 2.2.3 Search keywords for re-use
In each deliverable keywords to describe the main content have to be included
within the Executive Summary. It is possible to search for documents on
Projectplace with a search function. All documents, files and folders in which
the searchword appears are displayed.
#### 2.2.4 Version numbers
The version control for documents will be performed on Projectplace. It uses
version numbers in the format of _x_ . _y_ . Here, _x_ is the major version
and _y_ the minor version number. Once the document is checked in after
finishing the modifications, the version number is automatically updated. The
version number _within_ the document must also be updated. This has to be done
manually.
In case a new version is created by checking in a document on Projectplace, it
is required to attach a comment which describes what has changed within the
new version.
The following guidelines apply for using major and minor versions:
* Minor version: incremented for changes to draft versions of a document (prior to release).
* Major version: incremented when a reviewed version of the document is released. A released version will always have a version of the form _x_ .0. The major version number of a draft document may also be incremented if there are very significant changes.
If possible, version numbers should NOT be used in the filename. Instead, the
versioning system of Projectplace should be used. (see [8])
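The major/minor rules can be mirrored in a toy helper. Projectplace performs this bookkeeping automatically; the sketch below only illustrates the stated convention.

```python
# Toy illustration of the major/minor rules above (Projectplace handles this
# automatically; this sketch only mirrors the stated convention).
def bump(version: str, release: bool) -> str:
    """Draft edits bump the minor number; a reviewed release bumps the major
    number and resets the minor number to 0."""
    major, minor = (int(p) for p in version.split("."))
    return f"{major + 1}.0" if release else f"{major}.{minor + 1}"

assert bump("0.1", release=False) == "0.2"  # draft change
assert bump("0.2", release=True) == "1.0"   # reviewed release
```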
#### 2.2.5 Metadata
Currently, it is not foreseen to create new metadata within OSCCAR. In case
metadata is needed later on, it will be outlined here what type of metadata
will be created and how.
**2.3 Open access data**
#### 2.3.1 Open available data
Following the guidelines of the H2020 regulations defined for the Open
Research Data Pilot (ORDP) [1] , all data collected or produced within the
OSCCAR project consortium will be by default open. Some sensitive data may be
shared only under restrictions (e.g. within the consortium only).
To share data with the OSCCAR consortium, a repository has been set up: OSCCAR
Projectplace [10]. It provides access to project data and is also a platform
to view and share files easily. The access to data can be restricted
individually.
This repository is hosted by the project coordinator (VIF), who aims to reach
the highest level of General Data Protection Regulation (GDPR) [6] compliancy
amongst others by:
* Applying a strict policy in granting and revoking access to the data
* Logging of user identity during data access, download and upload, including version control. This enables restoring the availability of and access to the data in a timely manner in the event of a physical or technical incident
Deliverables with the distribution level “Public” as well as open access
publications will be uploaded on OSCCAR Website [12] where open access is
granted for all interested parties. Other open access data will be accessible
through open access repositories.
#### 2.3.2 Accessibility of data
To share data and make it accessible to OSCCAR consortium partners, a project
repository has been set up by VIF and is available [6]. As described above,
public deliverables and publications will be available on the OSCCAR website
as well as in the corresponding repositories.
According to the H2020 ORDP [1], open access to scientific publications is
obligatory, as well as open access to research data, where opt-outs are
possible. Beneficiaries will therefore deposit an electronic copy of the
published version or final peer-reviewed manuscript accepted for publication
in a repository for scientific publications – this ensures long-term
preservation. After that, beneficiaries must ensure open access to those
publications via the chosen repository. They can choose a repository but must
ensure open access within at most 6 months.
As described in the ORDP research data generated in the project will be as
open as possible and as closed as necessary. Open accessible research data
will be stored in a research data repository. As far as possible measures to
enable third parties to access, mine, exploit, reproduce and disseminate (free
of charge for any user) research data will be taken [1] [17].
#### 2.3.3 Deposition of data
As regulated in the ORDP, beneficiaries will also provide open access, through
the repository, to the bibliographic metadata that identify the deposited
publication [1] [17]. As suggested by the EC, OpenAIRE [18] / Zenodo [19] have
been chosen as such repositories.
Arrangements with the certified repository for open access data will be done
as soon as appropriate data is available.
#### 2.3.4 Restrictions on use
The access to the OSCCAR repository will be provided by the project
coordinator VIF. Access is only granted to project members. Restrictions to
any folders are possible. Restrictions on use are defined in the Grant
Agreement [9] and in the Consortium Agreement [5].
Open available data (public deliverables, publications, research results) will
be available on OSCCAR Website [12] and on according depositories like
described in section 2.3.2 .
#### 2.3.5 Data access committee
There is no need for a data access committee.
#### 2.3.6 Conditions for access
As suggested by the ORDP, within the OSCCAR project authors are encouraged to
retain their copyright and grant adequate licenses to publishers [1] [17].
#### 2.3.7 Access management
Open access repositories like OpenAIRE manage access to the respective data.
In Projectplace, user identity is logged during data access, download and
upload, including version control. This enables restoring the availability of
and access to the data in a timely manner in the event of a physical or
technical incident.
**2.4 Data interoperability**
The OSCCAR project recognises that common data and metadata standards and
formats are a key aspect of data interoperability. Standardisation makes data
discoverable and in this way promotes international and interdisciplinary
access to, and use of, research data. To ensure correct and proper use of the
OSCCAR data by the owners and re-users, the use of standardized vocabularies
and ontologies is also necessary.
Data exchange and re-use (between researchers, institutions, organisations,
countries) is provided for publications and research data in open access
repositories, as required by the ORDP.
#### 2.4.1 Data and metadata vocabularies, standards or methodologies
Standardisation at the data level will be performed by applying
community-based standards as used in peer-reviewed publications and
conferences, and ISO standards, like ESV, the Journal of Crashworthiness,
IRCOBI, etc.
Standard and common vocabularies will be used in all types of data to be
published by OSCCAR. Where additional explanation is necessary, it will be
provided.
#### 2.4.2 Licensing
Currently, data or software licensing is foreseen to be of the FOSS (free and
open source software) type, to be published accordingly (e.g. GPL v3).
#### 2.4.3 Reusability of data
As soon as data and results are ready to be made available, they will be
published and / or uploaded for open access. This section will be detailed
throughout the course of the project.
#### 2.4.4 Usability of data by third parties after the end of the project
In Table 1 an overview is given of the data generated in the project, how this
data will be exploited and/or shared/made accessible for verification and
re-use, and how this data will be curated and preserved. Project results and
outputs will be published on the OSCCAR website [12] and in open repositories
accessible for verification and re-use (see [9], section 2.2.4). The website
will remain online for at least five years after the project end.
OSCCAR project supports the concept of FAIR data, and will work toward making
research data FAIR. The decision about long-term provision will be taken as
the data are stored: open access data (e.g. public deliverables, publications)
will be made FAIR as long as possible.
Most research data of OSCCAR project will be open access as soon as the
research has been completed and published. There are no plans to end provision
of OSCCAR data, and they will therefore be available for re-use as long as the
archives exist. The data produced in the course of the OSCCAR project will be
re-usable for as long as the information they contain is relevant.
#### 2.4.5 Data quality assurance processes
An initial quality control is needed at the local level, early in the collection process. This initial control of the data, during data collection, is the primary responsibility of the data creator/owner, who must ensure that the data reflect the actual facts, responses, observations and events. The consortium agreement details the publication quality control process, which is also applied to ensure data quality.
Deliverable D8.2 [15] describes the data protection management system of the coordinator, VIF. To ensure the protection of personal data, the data protection management system of the coordinator VIF, or where required that of the data collecting partner organisation, will be applied.
**2.5 Allocation of resources**
#### 2.5.1 Costs for making data FAIR
Costs for establishing and maintaining the OSCCAR data repository are covered by the coordinator, VIF. While the repository will not be maintained after the end of the project, all files stored in it shall be retained after the project to meet the requirements of good scientific practice. A strategy for storing the files after the project is being developed and will be included in the follow-up DMP in month 16.
Resources for the long-term preservation of datasets will be ensured by storing them in the repositories. The costs for storing data will be borne locally.
The costs for making publications open access can vary from 500 to 5,000 euro, depending on the journal. This will be covered by the company/institution of the author(s); some funding resources are already dedicated to this (see other direct costs).
If eligible, and planned for, costs related to open access publication will be
covered by the H2020 Grant.
#### 2.5.2 Responsibilities of data management
The project coordinator has organised a well-structured data repository on Projectplace [10].
As project coordinator VIF is responsible for:
* Initial set-up of the data repository and upgrading when needed
* Maintenance of the data repository: definition, creation, updating of the data repository structure, i.e.: structure of folders and subfolders, names, contents and access, upload, download rights
* Perform security assessment on a regular basis in order to guarantee the agreed security level
* Reporting and blocking any possible security threat, taking appropriate measures accordingly
* Collecting users requests for access to and download of data in the repository
VIF is not responsible for interruptions of the data repository services that are due to force majeure.
The quality control of the data, during data collection, is the primary
responsibility of the OSCCAR partners (as data providers).
In general, the aim is to use European and international repositories. In
addition to this, all OSCCAR data will also be stored on OSCCAR Projectplace
[10].
The primary responsibility for back-up and recovery of the data also lies with the OSCCAR partners; for the data stored on Projectplace, this responsibility lies with VIF.
* ‘Data creation’ refers to the act of creating new data or acquiring existing data which is new to the project (for example by obtaining existing datasets for use in the project).
* If a consortium partner is the creator of data (e.g. by performing data collection or tests), the partner is responsible for properly storing, processing and sharing that data, and ensuring that it does not contain personal data before being shared in the consortium.
* If a consortium partner wishes to use information from a test, but is not the creator (e.g. by acquiring relevant datasets or relevant documentation), the partner is responsible for determining the source of the data, and assessing if the dataset contains personal or otherwise privacy- or commercial-compromising or sensitive data. If that is the case, it is the responsibility of the consortium partner to purge personal etc. data from that dataset and prepare it for further dissemination in a proper admissible form. [9]
#### 2.5.3 Resources for long term preservation
The OSCCAR consortium, i.e. its partners and the Executive Board, discusses all items related to the publication of data, including details such as timing, place and the necessary resources, if applicable.
**2.6 Data security**
#### 2.6.1 Provisions for data security
All shared, processed and operational data will be stored in secure
environments at the locations of consortium partners with access privileges
restricted to the relevant project partners. If (processed) data is to be
transferred from one partner to another, the transfer needs to be done
securely, for example via a secure data channel, in an encrypted mode or via
physical transfer. [9].
The OSCCAR project will undertake all efforts required to protect the data, products and services against unauthorised use. The primary responsibility for taking the necessary measures to ensure data security lies with the partners; once data is stored on the OSCCAR Projectplace repository [10], the security provisions of that platform apply.
The OSCCAR project will undertake all efforts required to provide secure access to data. Where applicable, authentication systems are used, requiring log-in before providing access to secured data and information. Furthermore, the OSCCAR project will take measures to comply with the EU regulations regarding the protection of personal data [13].
The OSCCAR project promotes a culture of openness and sharing of data and will therefore stimulate the exchange of good practices in data access and sharing by liaising with existing European initiatives.
#### 2.6.2 Repositories for long term preservation and curation
VIF provides a workspace (Projectplace) for the project users where necessary. The Projectplace environment is hosted in Stockholm, Sweden, and is ISO 27001 and SOC 2/SSAE 16 certified [20].
As suggested by the EC, OpenAIRE [18] and Zenodo [19] have been chosen as repositories for open access publications.
**2.7 Ethical aspects**
#### 2.7.1 Ethical or legal issues on data sharing
This section is covered in the context of the ethics review, the ethics section of the DoA and the ethics deliverables. Ethics and legal issues are covered within the OSCCAR Grant Agreement [9] and the OSCCAR Consortium Agreement [5]. Ethics is also covered in a separate deliverable, D8.2 [15], which describes the principles of handling personal data and the data protection procedure within the OSCCAR project. The aim of this deliverable was to outline an OSCCAR-specific data protection procedure, and it serves to document how the data protection requirements in OSCCAR are met. To ensure the protection of personal data, the Data Protection Management System of the coordinator VIF will be applied. Its goals are the establishment and maintenance of the regulations from the GDPR [6] and the DSG 2018 [16], decreasing the likelihood of potential data protection violations, and the continuous improvement of the data security management system (DSMS).
Subsequently, all collected personal data and the corresponding work packages and activities are defined. It is described what personal data has been collected and how the collected data will be managed, protected and preserved. No sensitive personal data, as defined by the EU GDPR [6], will be collected or will be necessary for the project. Even though no sensitive personal data is collected, the project still collects personal data.
There are no other ethical or legal issues that could have an impact on data sharing. The ethical and legal situation in general is constantly changing and is therefore monitored; the OSCCAR project will adapt accordingly.
#### 2.7.2 Informed consent for data sharing
A questionnaire that includes consent for data sharing and the long-term preservation of personal data is available in D8.1 [14]. Templates of the informed consent/assent forms and information sheets (in language and terms intelligible to the participants) will be kept on file and submitted on request (see D8.1 [14]).
**2.8 Other issues**
Currently, no other issues of interest have been identified. During the course of OSCCAR, additional aspects will be added here if necessary.
References
# Executive Summary
This deliverable presents the first version of the GreenCharge Data Management
Plan (DMP) and describes:
* **The guiding principles** for data management in the project
* **The legal framework** constituted by the General Data Protection Directive (GDPR)
* **Data Summary** : Overview of what data will be gathered and processed in the project
* How data will be stored and processed according to the H2020 **FAIR Data Management principles**, making data findable, accessible, interoperable, and reusable.
* **Resource allocation** : The costs of making data FAIR in this project
* **Data Security** : How we intend to keep the data secure
* **Ethical aspects** : A summary of the ethics and privacy strategy in GreenCharge
The purpose of the DMP is to contribute to good data handling through
indicating what research data the project expects to generate and describe
which parts of the data that can be shared with the public. Furthermore, it
gives instructions on naming conventions, metadata structure, storing of the
research data and how to make public data available.
During the 36 active months of the project, a SharePoint site will be used as
the online working and collaboration platform. SharePoint is only accessible
to project participants and can provide further access control through
establishing folders and sub-sites with stricter access granted than to the
main site. During the project all _anonymised_ (public) datasets will be
uploaded to this site and stored in accordance with the ethics and privacy
strategy of GreenCharge. _Non-anonymised_ datasets will be stored locally by
the designated Data Controllers for the three pilot sites, and not shared or
distributed in any way to others. Metadata will be added to all datasets, and
instructions on how to upload research data is provided.
GreenCharge will use the open research data repository _Zenodo_ to comply with
the Horizon 2020 Open Access Mandate. This mandate applies to the underlying
research data of publications, but beneficiaries can also voluntarily make
other datasets open. In GreenCharge, all deliverables, publications and the
anonymous parts of the underlying datasets will be uploaded to the _H2020
GreenCharge Community_ as well as the _European Commission Funded Research
(OpenAIRE) Community_ in Zenodo. Uploads will be done upon approval of the
deliverables by the European Commission, upon publication or acceptance of
scientific publications, or, for underlying datasets, at the end of the
project at the latest. Each dataset will be given a persistent identifier
(Digital Object Identifier, DOI), supplied with relevant metadata and linked
to the project name and grant agreement number. Publications and underlying
research data will be linked to a Creative Commons license which regulates
reuse. Data security arrangements are defined for the SharePoint site and
Zenodo. Ethical aspects related to data collection, generation and sharing
have been considered and nothing in this project shall be deemed to require a
party to breach any mandatory statutory law under which the party is
operating. This includes any national or European regulations, rules and norms
regarding ethics in conducting research.
The DMP is a living document and will be updated at the end of the project to
reflect the actual research data generated during the project and include
updated instructions for how to access open data. Day-to-day data management
and monitoring will be done using an online list in the SharePoint site that
will be continuously updated to reflect actual data generation. The
maintenance of this list is the responsibility of the Project Coordinator,
supported by the Data Controllers, the task leader for Task 5.3, and the Work
Package (WP) leader of WP9 Ethics.
# 1 About this Deliverable
## Why would I want to read this deliverable?
It provides an easy overview of research data the project is expected to
generate, the types and formats of this data, and how this data is processed
and stored to make them findable, accessible, interoperable and reusable,
according to the principles of FAIR data management. The purpose of the DMP is
to contribute to good data handling during the project's lifetime, and to
describe how such data will be curated and preserved.
## Intended readership/users
Internally in the project:
* All project participants who are responsible for, or in any way involved with, data collection and data handling can use this document, in addition to deliverable 9.1 POPD Requirement no. 1 describing the project's ethics and privacy strategy, for instructions on how to handle, store and process data.
* All project participants can use this document to get an overview of all data collected in the project and how this is processed and stored.
External audience:
* **Section 3, 4** : All relevant stakeholders who are interested in GreenCharge related activities and research topics can use this document to get an overview of the data collected in the project, how to access this data, and, if applicable, how to re-use this data in their own activities.
* **Section 2, 4, 6, 7:** All persons who voluntarily participate in the pilots and contribute data to the project can use this document to learn how the project processes and store their data.
## Other project deliverables that may be of interest
* **Deliverable 5.1 Evaluation Design for business model and technology prototype evaluation** and **Deliverable 6.1 Stakeholder acceptance Evaluation Methodology:** Describes the methodology for data collection.
* **Deliverables 2.4 Implementation plan for Oslo pilot / 2.10 Implementation plan for Bremen pilot / 2.17 Implementation plan for Barcelona pilot:** Describe the methods for data collection in the pilots.
* **Deliverables 4.1 Initial Architecture Design and Interoperability Specifications / 4.2 Final Architecture Design and Interoperability Specifications:** Describe the automated data collection and add detail to the technical measures for protection of data (secure storage, access control, etc.) and specifies functionality required by the GDPR (ability to see data, delete data, etc.).
* **Deliverables 5.2 Simulation and Visualisation tools** (initial version) / **5.3** **Simulation and Visualisation tools** (final version) / **5.4 Intermediate Results for Innovation Effects Evaluation** / **5.5** **Final Results for Innovation Effects Evaluation / 6.2 Data collection and Evaluation Tools** , and **6.3 Intermediate evaluation results for stakeholder acceptance analysis / 6.4 Stakeholder acceptance evaluation and recommendations** : These deliverables will all use the data collected and present the results of analysis of this data.
* **D8.2 Dissemination and exploitation plan** (V1) **/ 8.3 Dissemination and Exploitation Plan** (V2): These deliverables describe the plans for dissemination and exploitation of results and the completed and planned communication activities, and will help achieve widespread knowledge of where and how research data from the project can be accessed for reuse.
# 2 Introduction
## Guiding principle
The guiding principle of the GreenCharge project is to be an _open_ project,
with 48 out of 55 deliverables in the project being publicly available. Of the
7 that are not public, 2 are administrative reports and the remaining 5 are
initial plans or prototypes. Figure 1 illustrates the main procedure used in
the project to ensure open access to research data and publications.
Figure 1: Procedure for open access to research data and publications. Manually and automatically collected research data is split into anonymised data, which flows to the open data repository, gold open access publications and public deliverables, and data that cannot be anonymised (including contact information), which remains confidential and project-internal under access control; results with commercial potential are protected (patent, license, etc.).
To protect the privacy of individual participants in the pilots, only data
that can be irreversibly anonymised to the degree that it is impossible to
identify individuals will be shared publicly. Non-anonymised data will be kept
internally in the project and used as input to project work, but never shared
publicly in its original format. Both the anonymised and non-anonymised data
will, in an aggregated format, feed into project work and provide basis for
analysis in deliverables and scientific publications. If the editor of a
deliverable is concerned that their deliverable contains personal information,
they request a separate screening for privacy and ethics issues before
submission to be sure that no personal data is included. The leader of WP9
Ethics is responsible for performing these screenings. Public deliverables,
publications and anonymised datasets will be shared openly through an open
research data repository (see section 3.5).
During the lifetime of the project, partners might discover business
opportunities based on the project's results that can lead to commercial
exploitation. This will be monitored by the Innovation Manager, and if cases
arise appropriate steps to protect such results for exploitation purposes will
be taken. As Figure 1 shows, data underlying such results will not be openly
shared.
## Legal Framework
As of May 2018, the General Data Protection Regulation (GDPR) 1 is
applicable in all Member States in the European Union, as well as in the
countries in the European Economic Area (EEA). GDPR updates and modernises
existing laws on data protection to strengthen citizens' fundamental rights
and guarantee their privacy in the digital age.
GDPR regulates the processing by an individual, a company or an organisation
of personal data relating to individuals in the EU 2 . It does not apply to
the processing of personal data of deceased persons or of legal entities. It
sets down one set of data protection rules for all companies and organisations
operating anywhere in the EU and European Economic Area (EEA), for two main
reasons: 1) to give people more control over their personal data, and 2) to level the playing field for businesses and organisations operating in the EU and EEA.
GDPR grants individuals a set of rights that must be protected by any actor who
processes personal data. The individual rights include the right to:
* information about the processing of your personal data;
* obtain access to the personal data held about you;
* ask for incorrect, inaccurate or incomplete personal data to be corrected;
* request that personal data be erased when it's no longer needed or if processing it is unlawful;
* object to the processing of your personal data for marketing purposes or on grounds relating to your particular situation;
* request the restriction of the processing of your personal data in specific cases;
* receive your personal data in a machine-readable format and send it to another controller ("data portability"); and
* request that decisions based on automated processing concerning you or significantly affecting you and based on your personal data are made by natural persons, not only by computers. You can also have the right in this case to express your point of view and to contest the decision.
## Permissions for collecting and handling personal data
All data collection from stakeholders in the project will be done in accordance with applicable ethical standards and requirements in the respective countries of data collection, and the data will be processed and handled securely, in line with applicable rules and regulations on privacy and data protection.
The national Data Protection Authorities (DPA) in the countries where data
collection will be performed (Norway, Germany, Spain) have been notified of
the project activities and the project has received approvals from these to
collect and process personal data 3 . In accordance with these approvals,
the following partners will have the role of _Data Controllers_ in the project, one for each pilot site:
* Oslo pilot: SINTEF AS (SINTEF)
* Bremen pilot: Personal Mobility Center Nordwest eG (PMC)
* Barcelona pilot: Fundacio Eurecat (EUT)
Only parts of the data collected by GreenCharge will be personal data (see
section 3.2). In case other partners require access to process personal data,
they must request this from the Data Controllers, and if granted a data
processing agreement will be set up between the Data Controller and the
partner requesting access.
All personal data will be collected only upon receiving informed consent from
the participants, and any participant providing personal data can at any time
withdraw their participation and related data from the project. The privacy,
ethics and procedures for obtaining informed consent are described in detail
in deliverable 9.1, and a short summary is provided in section 7.
Before any publications (e.g. scientific papers, public deliverables) with the
potential of containing personal data is released to the public, it will go
through an ethics and privacy screening to ensure that all data included is
anonymised, aggregated and/or analysed in such a way as to ensure that none of
the content can be traced back to an individual participant or respondent. The
leader of WP9 Ethics is responsible for these checks.
# 3 Data Summary
Appendix A, on page 28, provides a list of all datasets currently expected to
be generated in the GreenCharge project and their planned accessibility. We
recognise that this list will develop and grow as the project evolves.
## Purpose of data collection and generation
The overall motivation for data collection in GreenCharge is to facilitate
evaluations and learning. GreenCharge will empower cities and municipalities
to make the transition to zero emission/sustainable mobility with innovative
business models, technologies and guidelines for cost efficient and successful
deployment and operation of charging infrastructure for Electric Vehicles
(EV). This involves evaluating the proposed solutions in pilots to test their
effect, as well as gathering input and feedback from citizens and users in
order to improve solutions, increase user acceptance, encourage behavioural
change, and achieve more optimal energy use in electric mobility.
Only data that is needed to perform project activities will be collected, and
as far as possible, participants will not be asked to provide personal data
unless this is necessary (see section 3.2).
## Data types, formats and size
**Types of data**
Some of the data the project will collect and generate is classified as
personal data, such as names, IP address, residence of participants, and car
license and registration numbers for EV owners. This data must be irreversibly
anonymised before being made public. If such data cannot be irreversibly
anonymised, it will remain confidential and only managed by designated Data
Controllers in the project. If a partner who is not a Data Controller needs
access to process data (Data Processer), an assessment of this will be done by
the Data Controller and if granted a specific Data Processing Agreement will
be set up between the Data Controller and the partner requesting access. A
template for Data Processing Agreement is provided in appendix B. Non-
anonymous data, although not openly shared in the project or beyond, can still
provide input to deliverables and publications, but only analysis of the
aggregated data, which cannot be linked to individual participants, will be
made public.
The data collected in GreenCharge can be split into the following three
categories (details on datasets provided in appendix A):
1. _Manually collected data:_
1. Data on Electric Vehicle (EV) charging behaviour (e.g. frequency of charging, willingness to share energy from vehicle batteries)
2. Data on energy use (e.g. information on energy use in general and willingness to share energy and change behaviour)
3. Data on user acceptance
4. Demographic data (e.g. age, gender, residence, ownership of EV)
5. Car ownership information (e.g. car licence/registration number, customer registration number for EV charging)
6. Pictures, audio and video (from pilots and workshops)
2. _Data automatically collected through technology:_
1. Data on Electric Vehicle (EV) charging (e.g. location of charging point, energy amount, time of charging)
2. Data on energy use in general in housing associations (e.g. energy profiles of common facilities at Røverkollen housing cooperative in Oslo)
3. _Contact Information_
1. Project partner representatives
2. Project external individuals who voluntarily participate in project and pilot activities

Data will be organised in datasets relating to the category of the data and the site of collection.
**Data formats**
A dataset can include different formats. For example, a manually collected dataset concerning user acceptance can consist of written interview notes, audio files from interviews, pictures from pilot sites, and survey responses. Some of this data cannot be anonymised within the scope of this project (e.g. audio files), so in most cases only parts of a dataset can be made openly available. Concerning the automatically collected data, the project expects to deliver these datasets anonymised as open research data.
GreenCharge will only use widely accepted formats for data generation, such
as:
* Documents/Reports/Publications: .pdf (PDF/A), .txt, .doc/.docx
* Spreadsheets: .xls/.xlsx
* Databases: .csv
* Audio files: .mp3, .wav, .wma, .ra
* Pictures: .jpg, .png
* Video: .avi, .flv, .mov, .mp4, .wmv
## Origin of data
GreenCharge will collect data at three main locations: Oslo, Bremen and
Barcelona. Depending on the type of data, there will be various methods and
origins of data collection involved at each site.
For manually collected data the main origins will be:
* Interviews with groups and individual participants in the pilots at each site
* Feedback from participants at stakeholder workshops
* Survey responses
* Market survey
* Literature study/review and open data (re-use of existing data)

For automatically collected data the main origins will be:
* Automated data collection at EV charging points located at a pilot site
* Mobile phone application voluntarily downloaded by participants
## SharePoint and metadata provision
All anonymised datasets will be stored in a SINTEF SharePoint project site.
This will be the project's online collaboration platform during the project
lifetime, and for up to 4 months after the end of the project for final
closing activities (see section 4.4.2). Data Controllers at each pilot site
will be responsible for uploading their public datasets to SharePoint. All
datasets will use standard SharePoint version control.
The non-anonymous datasets will be stored locally by the Data Controllers and
not shared with others, with the exception of project generated contact lists
which will be stored in a strict access-controlled SharePoint folder. More
details on how contact data will be handled is provided in section 7.2.7.
The following list describes the metadata that will be provided for each dataset (an illustrative example follows the list):

* File name
* Date
* Version
* File type
* Description
* WP (Work Package) number
* Responsible person
* Lead partner / Data Controller
* Dissemination level
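A metadata entry following this list could look as shown below. The values are hypothetical and serve only to illustrate the fields; the actual entries live in the SharePoint metadata list.

```python
# Hypothetical metadata record illustrating the fields listed above.
dataset_metadata = {
    "file_name": "DS_2_OSL_SINTEF_EV-ChargingProfiles_H2020_GreenCharge_0001",
    "date": "2019-06-30",                 # placeholder date
    "version": "1.0",
    "file_type": ".csv",
    "description": "Anonymised EV charging profiles from the Oslo pilot",
    "wp_number": "WP5",
    "responsible_person": "N.N.",         # placeholder name
    "lead_partner_data_controller": "SINTEF",
    "dissemination_level": "Public",
}
```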
## Zenodo
GreenCharge will use the open research data repository _Zenodo_ to comply with
the H2020 Open Access Mandate 4 . All scientific publications, including
public deliverables and public parts of underlying datasets will be uploaded
to the _H2020 GreenCharge Community_ 5 in addition to the _European
Commission Funded Research (OpenAIRE) Community_ 6 in Zenodo.
Zenodo is a "catch-all" open research data repository which gathers research
data across all disciplinary fields. It is for non-military purposes only, and
the repository is hosted and managed by CERN. All data deposited to Zenodo is
stored securely in the CERN Data Centre's cloud infrastructure 7 (see
section 6.2).
## Instructions for uploading datasets to SharePoint
Table 2 and Figure 2 detail the instructions to project participants on how to upload datasets to SharePoint and Zenodo.
**Table 2: Instructions for uploading datasets**
<table>
<tr>
<th>
**Upload instructions - GreenCharge Sharepoint Site**
</th> </tr>
<tr>
<td>
* Please upload all public datasets to this folder in the GreenCharge Sharepoint site: o _Research Data_
▪ There is one sub-folder per pilot site
* Use this naming convention (for details see 4.1.4): o _Descriptive text H2020_Acronym_DeliverableNumber_UniqueDataNumber_ o _Descriptive text H2020_Acronym_PublicationNumber_UniqueDataNumber_
* Be sure to use the same file name when uploading later versions
* Register mandatory metadata on your data set by adding a new item to this list, located in the same folder. This list will also generate a Unique Data Number for your dataset:
o _GreenCharge Research Data_
</td> </tr>
<tr>
<td>
**Upload instructions - Zenodo**
</td> </tr>
<tr>
<td>
* Scientific publications, public deliverables and public datasets must also be uploaded to the __H2020_ _ __GreenCharge Community_ _ _**AND** _ the __European Commission Funded Research (OpenAIRE)_ _ __Community_ _ in Zenodo. To do this you must complete the following steps:
* Create a profile in Zenodo to be able to upload files o Click on the GreenCharge link above, or search for " _H2020 GreenCharge_ " under the
"Communities" tab at the top of the Zenodo site o On the Community site, click
the green "New upload" button in the top right corner o Enter requested data
and confirm the upload. The information requested is located in the metadata
list on SharePoint ( _GreenCharge Research Data_ )
* Remember to add the European Commission community in the box labelled "communities". You can use the search function to locate the community and add it. The data will then automatically be uploaded to both communities, so you don't have to do it twice.
* Uploading should be done as soon as possible and at the latest on article publication. Data Controllers are responsible for uploading datasets generated by them.
</td> </tr> </table>
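The Zenodo part of these instructions can also be scripted against Zenodo's public REST deposit API. The sketch below is a hedged illustration of the documented API flow (create a deposition, upload a file to its bucket, attach metadata); the access token and file names are placeholders, and the community identifiers ("greencharge", "ecfunded") are assumptions to be verified against the actual Zenodo communities.

```python
# Sketch: scripted upload of a public dataset to Zenodo via the REST deposit API.
# ACCESS_TOKEN, file names and community identifiers are placeholders/assumptions.
import requests

ACCESS_TOKEN = "..."  # personal token created under the Zenodo profile settings
params = {"access_token": ACCESS_TOKEN}
base = "https://zenodo.org/api/deposit/depositions"

# 1. Create an empty deposition.
deposition = requests.post(base, params=params, json={}).json()
bucket_url = deposition["links"]["bucket"]

# 2. Upload the dataset file into the deposition's file bucket.
with open("DS_2_OSL_SINTEF_EV-ChargingProfiles_H2020_GreenCharge_0001.csv", "rb") as fp:
    requests.put(f"{bucket_url}/EV-ChargingProfiles.csv", data=fp, params=params)

# 3. Attach metadata, linking both communities and the grant number.
metadata = {"metadata": {
    "title": "EV charging profiles, Oslo pilot (anonymised)",
    "upload_type": "dataset",
    "description": "Anonymised EV charging profiles from the GreenCharge Oslo pilot.",
    "creators": [{"name": "GreenCharge consortium"}],
    "communities": [{"identifier": "greencharge"}, {"identifier": "ecfunded"}],
    "grants": [{"id": "769016"}],
}}
requests.put(deposition["links"]["self"], params=params, json=metadata)

# 4. Publishing (POST to the deposition's 'publish' link) is left as a manual step.
```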
# 4 FAIR Data Management
GreenCharge will manage data in accordance with the principles of **FAIR data management** 7 (Findable, Accessible, Interoperable and Re-usable data). The project aims to maximise access to, and re-use of, research data generated by the
the project. At the same time, there are datasets, or parts of datasets,
generated in this project that cannot be shared in order to protect the
privacy of voluntary participants in the pilots. Appendix A provides a current
overview on the datasets GreenCharge expects to generate and their
accessibility.
## Making data findable
### 4.1.1 The H2020 GreenCharge Community in Zenodo
GreenCharge will use the Zenodo repository as the main tool to make our
research data findable in accordance with the H2020 Open Access Mandate.
An _H2020 GreenCharge_ 9 community has been established on the Zenodo website, and the project will upload all public datasets and deliverables as well as scientific publications to this community. In addition, we will link all our uploads to the _European Commission Funded Research (OpenAIRE)_ community for maximum findability. All uploads will be enriched with standard
Zenodo metadata, including Grant Number and Project Acronym. Zenodo provides
version control and assigns DOIs to all uploaded elements.
### 4.1.2 Metadata in Zenodo
Metadata associated with each published data set in Zenodo will by default be
as follows:
* Digital Object Identifiers
* Version numbers
* Bibliographic information
* Keywords
* Abstract/description
* Associated project and community
* Associated publications and reports
* Grant information
* Access and licensing info
* Language
In addition, we will add the project name and GA number.
### 4.1.3 Approach to search keywords
The Data Controllers at each pilot site will be responsible for uploading
public datasets that they have generated and to assign specific keywords
relevant to these datasets. Dataset specific keywords must be descriptive to
the content of the dataset. E.g., a dataset containing information on EV user
acceptance should be tagged with corresponding keywords such as, " _EV user
acceptance_ ". In addition, the project has defined a set of general keywords
that should apply to all public datasets, scientific publications and public
deliverables. These are as follows:
* Electric Mobility
* E-mobility
* Zero Emission Transport
* Energy Smart Neighbourhoods
* Sharing Economy
### 4.1.4 Naming conventions
Datasets will be named using the following naming conventions:
_DS_PilotCode_DataCategoryNr_DataController_Description_H2020_Acronym_UniqueDataNr_
Explanation of the naming convention:
* "DS" stands for dataset
* The pilot site identification codes are as follows:
  * Oslo: OSL
  * Bremen: BRE
  * Barcelona: BCN
* "DataCategoryNr" refers to the list of data categories described in section 3.2:
  * 1 = Manually collected data
  * 2 = Automatically collected data
  * 3 = Contact information
* "DataController" refers to the short name of the partner/Data Controller who is responsible for the dataset. GreenCharge has three Data Controllers, one for each pilot site (see section 2.3 for overview).
* "Description" refers to a _short_ description of the content of the dataset (see example)
* "UniqueDataNr" is the number automatically generated by the research metadata list in SharePoint (see section 3.6).
**Example of dataset name:** _DS_2_OSL_SINTEF_EV-
ChargingProfiles_H2020_GreenCharge_0001_
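To reduce manual naming errors, the convention can be encoded in a small helper. The sketch below is illustrative only (not project tooling) and follows the field order of the example name above:

```python
# Illustrative helper that assembles dataset names matching the example above.
PILOT_CODES = {"Oslo": "OSL", "Bremen": "BRE", "Barcelona": "BCN"}
CATEGORY_NUMBERS = {"manual": 1, "automatic": 2, "contact": 3}

def dataset_name(category: str, pilot: str, controller: str,
                 description: str, unique_nr: int) -> str:
    """Build a dataset name such as
    DS_2_OSL_SINTEF_EV-ChargingProfiles_H2020_GreenCharge_0001."""
    return "_".join([
        "DS",
        str(CATEGORY_NUMBERS[category]),
        PILOT_CODES[pilot],
        controller,
        description,
        "H2020",
        "GreenCharge",
        f"{unique_nr:04d}",   # the unique data number generated in SharePoint
    ])

assert dataset_name("automatic", "Oslo", "SINTEF", "EV-ChargingProfiles", 1) == \
    "DS_2_OSL_SINTEF_EV-ChargingProfiles_H2020_GreenCharge_0001"
```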
### 4.1.5 Versioning
Zenodo provides DOI versioning of all datasets uploaded to their communities,
which allows us to edit and update the uploaded datasets after they have been
published. This also allows us to cite specific versions of an upload and cite
all versions of an upload. As an example, DOI versioning of an uploaded
software package that is released in two versions can look like this 8 :
* v1.0 (specific version): 10.5281/zenodo.60943
* v1.1 (specific version): 10.5281/zenodo.800648
* Concept (all versions): 10.5281/zenodo.705645
The first two DOIs, for versions v1.0 and v1.1, represent the specific versions of the software. The last DOI represents all versions of the given software package, i.e. the concept of the software package and the ensemble of its versions. They are therefore also referred to as _Version DOIs_ and _Concept DOIs_, but technically they are both normal DOIs.
This does not, however, mean that you will receive a new DOI each time you
edit the metadata related to your upload (e.g. change the title of a file or
dataset). A new DOI version will only be created if you update the actual
files you have uploaded.
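Assuming Zenodo's documented records API, a re-user can list all versions behind a concept record roughly as follows; the query parameters are taken from the public API documentation and should be treated as a sketch to verify:

```python
# Sketch: list all version DOIs behind a Zenodo concept record id.
import requests

r = requests.get("https://zenodo.org/api/records",
                 params={"q": "conceptrecid:705645", "all_versions": "true"})
r.raise_for_status()
for hit in r.json()["hits"]["hits"]:
    print(hit["doi"])   # one DOI per published version
```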
## Making data accessible
The H2020 Open Access Mandate aims to make research data generated by H2020
projects accessible with as few restrictions as possible, but also accept
protection of personal or sensitive data due to privacy concerns and/or
commercial or security reasons.
All public datasets, scientific publications and deliverables will be uploaded
to Zenodo and made openly available, free of charge. Publications and
underlying data sets will be linked through the use of persistent identifiers
(DOI versioning). Data sets with dissemination level "confidential" (non-
anonymous datasets) will not be shared due to privacy concerns. Potentially,
some datasets might be restricted in order to protect commercial exploitation. If such cases arise during the project, this will be reported in the final version of the DMP.
Metadata, including licences for individual data records as well as record collections, will be harvestable using the OAI-PMH protocol by the record identifier and the collection name. Metadata is also retrievable through the public REST API. The data will be available through www.zenodo.org, and hence accessible using any web browsing application.
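For example, a harvester could fetch the Dublin Core metadata of the community's records over OAI-PMH as sketched below. The endpoint is Zenodo's documented OAI-PMH interface; the set identifier follows Zenodo's `user-<community>` convention and is therefore an assumption to verify.

```python
# Sketch: harvest Dublin Core titles for a Zenodo community via OAI-PMH.
import requests
import xml.etree.ElementTree as ET

resp = requests.get("https://zenodo.org/oai2d", params={
    "verb": "ListRecords",
    "metadataPrefix": "oai_dc",
    "set": "user-greencharge",   # assumed 'user-<community>' set name
})
resp.raise_for_status()

root = ET.fromstring(resp.content)
DC = "{http://purl.org/dc/elements/1.1/}"
for title in root.iter(DC + "title"):
    print(title.text)
```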
The list of expected datasets in appendix A constitutes the first version of
dataset description and we recognise that it will develop and grow as the
project evolves. In addition, some information concerning the datasets remain
unknown at this time, e.g. size of the datasets. An updated version of this
list will be provided at the end of the project. Furthermore, information on
how GreenCharge will handle and process pictures and video is described in
section 7.2.3, and how we handle and process contact information is described
in section 7.2.7.
## Making data interoperable
Zenodo uses JSON schema as the internal representation of metadata and offers
export to other formats such as Dublin Core, MARCXML, BibTeX, CSL, DataCite
and export to Mendeley. The data record metadata will utilise the vocabularies
applied by Zenodo. For certain terms, these refer to open, external
vocabularies, e.g.: license (Open Definition), funders (FundRef) and grants
(OpenAIRE). Reference to any external metadata is done with a resolvable URL.
## Reusable data
The GreenCharge project will enable third parties to access, mine, exploit, reproduce and disseminate (free of charge for any user) all _public_ data sets, and will regulate this by using Creative Commons licences.
### 4.4.1 Recommended Creative Commons (CC) licences
Application of licences will be assessed on a case-by-case basis in close collaboration with the Coordinator, the Innovation Manager and the partners concerned.
If applicable, GreenCharge will use Creative Commons licences (CC), which are
tools to grant copyright permissions to creative work. As a default, the CC-
BY-SA license will be applied for public GreenCharge data. This license lets
others remix, tweak, and build upon your work even for commercial purposes, as
long as they credit you and license their new creations under the identical
terms. This license is often compared to “copyleft” free and open source software licenses. With this licence, all new work based on GreenCharge data and results will carry the same license, so any derivatives will also allow commercial use. This does not preclude the use of less restrictive licenses such as CC-BY, or more restrictive licenses such as CC-BY-NC, which does not allow commercial usage.
### 4.4.2 Longevity of the GreenCharge research datasets
**Public (anonymous) data**
For data published in scientific journals, the underlying data will be made
available no later than by journal publication. The data will be linked to the
publication. Data associated with public deliverables will be shared once the
deliverable has been approved and accepted by the EC. For other public
datasets not directly linked to a scientific publication or deliverable, such
datasets will be made available upon assessment by the Data Controllers that
it is ready for publishing, and in the final month of the project at the
latest.
Open data can be reused in accordance with the Creative Commons licences. Data
classified as confidential will as default not be reusable due to privacy
concerns.
The public data will remain reusable via Zenodo for at least 20 years. This is
currently the lifetime stated by the host laboratory CERN. In the event that
Zenodo has to close their operations, they have provided a guarantee that they
will migrate all content (including metadata) to other suitable repositories.
**Confidential (non-anonymous) data**
All non-anonymous data will be deleted at the end of the project. In case
permission is given by the party providing and owning the data, some non-
anonymous data will be kept for a maximum of 4 months after the contractual
end date of the project 9 . The additional 4 months is to keep the
underlying datasets available to allow the completion of any scientific
publications being prepared towards the end of the project.
An exemption is pictures and videos, taken with consent from voluntary pilot
participants, that are used for communication purposes. If consent is _not_
withdrawn at an earlier time, such data will be kept for up to 4 years after
the end of the project in order to comply with the EC contractual obligation
to continue dissemination and exploitation activities after the project ends.
If a party withdraws the consent to use this material (pictures, videos), it
will be deleted without delay.
**Classification of research outputs**
The process of classifying research outputs from GreenCharge follows the guidelines provided in the "_H2020 Guidance for the classification of information in research projects_" 12 and will be described in D1.2 Project Management Handbook.
# 5 Allocation of resources
**Costs**
GreenCharge uses standard tools and a free of charge research data repository.
The costs of data management activities are limited to project management
costs and will be covered by allocated resources in the project budget.
Long-term preservation of the public data is ensured through Zenodo. Other
resources needed to support reuse of data after the project ends will be
solved on a case-by-case basis.
**Data Manager**
The overall responsibility for data management lies with the project
coordinator, Mr. Joe Gorman from SINTEF.
Supporting the coordinator is a data management team consisting of the Data
Controllers for each pilot site (BREMEN, EUT, SINTEF), the leader of WP9 on
ethics (SINTEF), and the task leader of Task 5.3 _Research Data Management_
(SUN).
# 6 Data security
In this chapter, the security features of the research data infrastructure
used to store and handle data in the GreenCharge project are described.
## Data security as specified for SINTEF SharePoint
SINTEF SharePoint is the online collaboration platform used by the GreenCharge project. A dedicated project site has been established on this platform,
accessible only by the partner representatives in the consortium. Furthermore,
a dedicated folder for research datasets is set up, allowing for stricter
access control than the main project site. Only anonymous datasets will be
uploaded to this SharePoint folder.
The GreenCharge Sharepoint site has the following security settings:
* Access level: Restricted to persons (project members only). Further access restrictions on specific folders are enabled.
* Encryption with SSL/TLS protects data transfer between partners and the SINTEF SharePoint site.
* Threat management, security monitoring, and file-/data integrity prevents and/or registers possible manipulation of data.
Documents and elements in the SINTEF SharePoint sites are stored in
Microsoft's cloud solutions, based in Ireland and the Netherlands. There will
be no use of data centres outside EU/EEA (Norway, Iceland and Switzerland) or
in the US.
Nightly back-ups are handled by SINTEF's IT operations contractor. As a
baseline, all project data will be stored for 10 years according to SINTEF's
ICT policy, unless otherwise agreed in contracts and data processing
agreements.
## Data security as specified for Zenodo
The following list describes the security settings for Zenodo:
* Versions: Data files are versioned. Records are not versioned. The uploaded data is archived as a Submission Information Package. Derivatives of data files are generated, but original content is never modified. Records can be retracted from public view; however, the data files and records are preserved.
* Replicas: All data files are stored in the CERN Data Centres, primarily Geneva, with replicas in Budapest. Data files are kept in multiple replicas in a distributed file system, which is backed up to tape on a nightly basis.
* Retention period: Items will be retained for the lifetime of the repository. The host laboratory of Zenodo, CERN, has defined a lifetime for the repository of at least the next 20 years.
* Functional preservation: Zenodo makes no promises of usability and understandability of deposited objects over time.
* File preservation: Data files and metadata are backed up nightly and replicated into multiple copies in the online system.
* Fixity and authenticity: All data files are stored along with an MD5 checksum of the file content.
* Files are regularly checked against their checksums to assure that file content remains constant (see the sketch after this list).
* Succession plans: In case of closure of the repository, a guarantee has been made from Zenodo to migrate all content to suitable alternative institutional and/or subject based repositories.
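The fixity check described above can be reproduced by any re-user who downloads a file and compares it against the checksum published in the record; a minimal sketch (the checksum value shown is a placeholder):

```python
# Minimal sketch: verify a downloaded file against its published MD5 checksum.
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

published_md5 = "d41d8cd98f00b204e9800998ecf8427e"   # placeholder from the record
assert md5_of("downloaded_dataset.csv") == published_md5, "file content changed"
```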
# 7 Ethical aspects
## Legal aspects
The legal aspects that impact data sharing are described in section 2.2 _Legal framework_. The proposed work in GreenCharge will fully comply with the regulations set out in the GDPR. In addition, GreenCharge complies with the principles of the European Charter for Researchers and the European Code of Conduct for Research Integrity, including ethical standards and guidelines, regardless of the country in which the research is carried out.
Nothing in this project shall be deemed to require a party to breach any
mandatory statutory law under which the party is operating. This includes any
national or European regulations, rules and norms regarding ethics in
conducting research.
The ethical aspects impacting the GreenCharge project are described in detail
in deliverable 9.1 _POPD – Requirement No 1_ and responds to the findings of
the ethics review performed by the European Commission in the proposal
evaluation phase. However, since deliverable 9.1 is a confidential document, a
summary of the ethics and privacy strategy is included in section 7.2. The
template for the information letter and the informed consent form that will be
used when collecting data from voluntary participants in the pilots is
included in appendix C and D.
## Summary of Ethics and Privacy Strategy
### 7.2.1 Commitment to ethical principles
All project partners are obliged by European and national law (GDPR) to
protect personal data.
The coordinator of the GreenCharge project, SINTEF, follows ethical guidelines in its work, and _all_ work conducted by SINTEF is subject to the SINTEF Ethics Council and the appointed Ethics Representative. SINTEF will also ensure that all participants in the GreenCharge project follow the ethical guidelines of SINTEF. Important aspects with respect to this are:
* The ethical guidelines are based on the vision of using science and technology to create a better society and are reviewed continuously to ensure they stay up to date with developments in society and the challenges of today. They generally fall into these categories: research ethics, business ethics, and ethics in interpersonal relationships.
* SINTEF is a member of the UN Global Compact and Transparency International, and SINTEF’s ethics are guided by the principles highlighted by these organisations, as well as based on the regulations of the national ethics committees, the principles promoted by the European Group on Ethics in Science and New Technologies (EGE), and on international conventions such as the Vancouver Convention. When dilemmas of research ethics require an assessment beyond the scope of our guidelines, our Ethics Council and Ethics Representative, we refer to statements from the EGE.
* All SINTEF's employees are expected to act in accordance with the ethical guidelines and principles. As coordinator of the GreenCharge project, SINTEF will ensure that any ethical issues, which may arise, will be handled appropriately and in a transparent and fair manner.
### 7.2.2 Privacy strategy
To ensure compliance with all applicable ethical and privacy requirements, the
privacy strategy encompasses all data assurance activities that will be
performed in the context of the GreenCharge project. Figure 3 provides an
overview of deliverables and activities related to the data collection from
the pilots.
Figure 3: Processing personal data from pilots in GreenCharge
For information on how GreenCharge will handle and process contact
information, which is considered personal data not directly linked to the
pilots, see section 7.2.7.
### 7.2.3 Collecting personal data from pilots
Data collection activities (interviews, surveys, etc.) will be designed to
maintain privacy. Personal data will not be requested unless this is
absolutely necessary. Vulnerable groups like minors and individuals unable to
freely provide an informed consent will be excluded. Participation is
voluntary. Participants will be given the possibility to decline and withdraw
their participation at any time.
GreenCharge will collect pictures and video for use in communication
activities (website, newsletter, social media, see section 4.4.2). Pictures
and video can contain personal data if an individual is the focus of the image
or video. Examples include: 1) pictures/video of individuals stored together
with personal details (e.g. identity cards); 2) pictures/video of individuals
posted on the project website along with biographical details; 3) individual
images published in a newsletter. Examples of pictures and video that is
unlikely to contain personal data are: 1) pictures/video where people are
incidentally included in an image or are not the focus (e.g. at a big
conference/workshop); 2) images of people who are no longer alive (the GDPR
only applies to living people, see section 2.2).
When collecting pictures and video GreenCharge will follow established
guidance and best practice on collecting and processing such data to ensure
that we adhere to the legal requirements (e.g. guidance established by the
University of Reading, UK 10 ). Under no circumstances will pictures
containing personal information be publicly shared without the subject's
explicit consent.
### 7.2.4 Information letter and consent form
The participants will be given an information letter and a consent form (on
paper or electronically). The information letter will provide information
about:
* The type of data that will be collected during the study.
* How the data will be collected (interview, automatic data collection, etc.)
* What the data will be used for. The information letter will explain the purpose of the project and the expected results. It will also be explained that published information always will be anonymous, and that no personally identifiable information will be published in any way.
* How the data collected will be handled. The information letter will explain that personal data will be treated in full confidentiality and will be registered and stored in a secure manner. The data will be deidentified before it is processed (name or other characteristics serving to identify person will be replaced by a number and the list of identifiers will be kept separate from the data).
* Who will have access to the data. The information letter will state that data will be handled by a very limited number of authorised personnel and that confidentiality will be regulated by legal agreements. The data will be de-identified before it is discussed and processed within the project.
* The rights of the participants. The information letter will state that participation is voluntary and that participants have the right to see the data collected about them and that they can withdraw from the study at any time without any obligation to explain their reasons for doing so (contact information for such requests will be provided).
### 7.2.5 Protecting personal data collected from pilots
Personal data will be handled in accordance with European legislation on
privacy (GDPR). Under no circumstance will the deliverables or processes
compromise the individual right to privacy and satisfactory handling of
personal data.
All personal data will be stored on secure servers with access control managed
by the Data Controllers. Personal data will be handled by authorised
personnel, and no one will have access to the data unless this is necessary to
carry out the project work.
### 7.2.6 Using and sharing data from pilots
Analysis of the data (e.g. in evaluations) will be carried out on de-identified and anonymised data.
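As an illustration of this de-identification step (names replaced by numbers, with the key list kept separate, as described in section 7.2.4), a hedged sketch follows. Column names and file paths are assumptions; real handling is governed by the Data Controllers and the project's privacy strategy.

```python
# Sketch: de-identification by pseudonymisation with a separately stored key list.
import csv
import itertools

counter = itertools.count(1)
key_list = {}   # identifier -> pseudonym; stored separately under access control

def pseudonym(identifier: str) -> str:
    if identifier not in key_list:
        key_list[identifier] = f"P{next(counter):04d}"
    return key_list[identifier]

with open("responses_raw.csv", newline="") as src, \
     open("responses_deidentified.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        row["name"] = pseudonym(row["name"])   # assumes a 'name' column
        writer.writerow(row)

# Full anonymisation at the end of the project amounts to deleting key_list.
```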
At the end of the project, all personal data (audio and video files included)
will be deleted, and the deidentified data will be completely anonymised,
meaning that the links to the lists of keys will be deleted. No personal data
will be stored after the end of the project, unless explicit consent to do
this is given by the provider/owner of the data (see section 4.4.2). If such
permission is given, non-anonymous data will be stored for a maximum of 4
months after the contractual end of the project (to allow for finalisation of
scientific publications).
For other non-anonymous data, such as pictures and videos used for project
communication activities, these will be kept for up to 4 years after the end
of the project (see section 4.4.2 for more information). Such data will be
shared, upon explicit consent only, through the project website, newsletters,
and social media. If a party withdraws the consent to use this material
(pictures, videos), it will be deleted without delay.
The anonymous data will be documented and archived in a research data
repository as open research data, and thus placed at the disposal of
colleagues who want to replicate the study or elaborate on its findings. Any
publications, including publications online, neither directly nor indirectly
will lead to a breach of agreed confidentiality and anonymity.
The research outcomes will be reported without contravening the right to
privacy and data protection.
### 7.2.7 Managing contact information
Some of the contact information for external parties will be curated and preserved entirely by one partner. The dissemination partners PNO and ICLEI, for example, have their own pre-existing contact lists that will be used for dissemination and communication purposes. These contact lists will not be shared within the project, but they will be managed according to the GDPR by these partners.
Contact information for other external actors established just for the purpose
of the project will be managed within the project in accordance with GDPR. All
project generated contact lists will be stored in the GreenCharge SharePoint
project site hosted by SINTEF. Access control will be implemented to ensure
that only those who require this information to perform their activities can
access it. Access will be managed by SINTEF. Contact information will never be
shared with third parties, and only the essential information needed will be
kept and stored. On request from external parties, the project will provide
information on the personal information the project is managing related to
this party, as well as provide opportunity to correct or delete information
(upon withdrawal of consent).
# 8 Conclusions
Formal approval and release of this deliverable within the consortium
constitutes a formal commitment by partners to adhere to the data management
strategy and the procedures it defines. When the deliverable is formally
approved by the European Commission, this constitutes confirmation that the
procedures are considered by the European Commission to be adequate.
As coordinator of the GreenCharge project, SINTEF will ensure that any data
management issues which may arise during the project will be handled
appropriately and in a transparent and fair manner.
The DMP is a living document that will expand as the project evolves and new
information on data collection, generation and handling arise. Day to day data
management will happen through the online tools described in this document,
and through continuous collaboration between the coordinator, the Data
Controllers, the WP9 leader, and the T5.3 leader. A revised and extended
version of this DMP will be prepared towards the end of the project to reflect
the current status of data management in the project.
# Executive Summary
The current document, D1.4: “Initial Privacy protection & Data Management Plan”, constitutes the first version of the AVENUE Data Management Plan. It is the project’s guide as to how the Consortium will manage data throughout all its phases (collection, storing, sharing, processing, etc.), what decisions the Consortium will make regarding making data Findable, Accessible, Interoperable and Re-usable (FAIR), and the mechanisms that enable those data management decisions.
AVENUE aims at providing citizens with door-to-door public transport services
to facilitate their mobility through autonomous mini buses. The project aims
to include all potential types of users coming from diverse background and
travel habits and preferences in order to offer tailor-made solutions that
meet their needs.
To provide the above-mentioned solution, the project will collect user-related
data. The Consortium must fully comply with any laws and regulations in any
relevant jurisdiction relating to privacy or the use or processing of data
relating to natural persons. These include:
1. EU Directives 95/46/EC and 2002/58/EC (as amended by 2009/139/EC) and any legislation implementing or made pursuant to such directives and the Privacy and Electronic Communications (EC Directive) Regulations 2003;
2. from 25 May 2018, the EU General Data Protection Regulation 2016/679 ("GDPR"); and
3. any laws or regulations ratifying, implementing, adopting, supplementing or replacing GDPR; in each case, to the extent in force, and as such are updated, amended or replaced from time to time.

Specifically, following the EC guidelines for Data Management Plans [1], **Chapter 1** introduces the purpose and intended audience of the current document as well as the interrelations to other project work items.
**Chapter 2** presents the nature of the data to be handled in AVENUE, its
categorisation, sources, the privacy policy as well as the template to be used
for describing the datasets and the objectives of the project that will be met
through these data collection and processing.
**Chapter 3** describes the data collection and storage processes, data
protection, retention policy, access, sharing, ownership as well as any
measures to be taken for preventing malevolent abuse of the research findings.
**Chapter 4** describes the processes and the mechanisms that will be applied
for making the data FAIR. **Chapter 5** refers to the necessary GDPR-compliant
roles that will be allocated for the smooth operation of the project. It also
presents the project’s ethical policy, ethics helpdesk and all relevant
ethical aspects, which are/will be further defined in Deliverables D11.1,
D11.2, D11.3 and D11.4.
Finally, **Chapter 6** concludes the document.
The current document is a living document and thus will be updated during
the project lifetime as needed, including more detailed information regarding
collected data. The next official updated version will be publicly released as
part of D1.5 “Final Privacy protection & Data Management Plan”, in Month 48,
providing final information about the descriptions of the different data sets,
the AVENUE repository and the final DPIA report (the template form can be found
in Annex 1.2), as well as any data embargoes and data destruction periods.
# Introduction
Public transport constitutes a key element in the economic development of a
region and the quality of life of its citizens. Throughout the years,
municipalities and public transport operators have aimed to improve services
through an optimal balance between increased and improved service (more
vehicles, more km, comfortable, convenient, reliable, etc.), usage incentives
(lower fares, Park-and-Rail, etc.) and costs. At the same time, the most common
criticisms of public transport concern the low speed and flexibility compared
to private cars, the high transport fees and the reliability and availability
of the service.
The “ **A** utonomous **V** ehicles to **E** volve to a **N** ew **U** rban **E**
xperience” (AVENUE) project will especially focus on introducing disruptive
public transportation paradigms led by SMEs, on the basis of door2door services
and the nascent concept of the ‘Mobility Cloud’, aiming to set up a new model
of public transportation.
## Project objectives
AVENUE aims to deploy and validate autonomous vehicles that are integrated in
the existing public transport services for several European cities and to
validate the safety of autonomous vehicles being used in complex urban
environments. Additionally, the project aims to develop and test new,
innovative and disruptive urban public transport services based on the use of
autonomous vehicles, so as to raise user awareness and increase user
acceptance. At the same time, AVENUE will ensure that the use of autonomous
vehicles within the context of urban public transport services is a new
experience for the passengers. Recommendations will be made for public
transport operators and authorities concerning the development and integration
of autonomous vehicles in the urban and sub-urban transport environments to
encourage the promotion of the advantages of public transport autonomous
vehicles to the public. Finally, AVENUE will evaluate the socio-economic
impact and the benefits of the deployment of autonomous vehicles for urban
public transport.
## Conceptual architecture
The figure below depicts AVENUE’s conceptual architecture, including all
different elements and roles needed to provide the personalized point-to-point
services:
1. Transport operators and bus manufacturers that will collaborate to improve fleet management, service optimisation and access bus performance analytics;
2. The AVENUE core service platform;
3. Transport infrastructure and the automobile and pedestrian traffic that provide the AVENUE core service platform with itineraries/status and city/traffic information respectively;
4. Autonomous minibus that includes the autonomous vehicle control;
5. Human assistant inside the vehicle that caters to special services addressing users with special needs;
6. The vehicle passengers that request the in-vehicle services;
7. The user interfacing (smartphones, call centre, cell phones) that handles in- and out-of-vehicle responses;
8. Out of vehicle users that interact with the AVENUE platform through transport requests and request of information/validation.
**Figure 1. AVENUE conceptual architecture**
## Purpose of the document
This document is the first version of the project’s Data Management Plan
(DMP). The purpose of this document is to provide an overview of the data set
types and categories collected during the lifetime of the project and to
define the project’s data management policy that will be further adopted by
the Consortium.
The data management plan defines how data in general and research data will be
handled during the research project and will make suggestions for the after-
project time. It describes what data will be collected, processed or generated
by the AVENUE core service platform and by all the AVENUE ecosystem, what
methodologies and standards shall be followed during the collection process,
whether and how these data shall be shared and/or made open not only for the
evaluation needs but also to comply with the General Data Protection
Regulation (GDPR) requirements. In addition, this document will define how
the data shall be curated and preserved. The AVENUE Data Management Plan will
be updated as the project evolves.
The overarching project-related GDPR mechanisms are the following:
* **completing** the GDPR compliance template by all project partners that will collect, store and analyse data (template can be found in Annex 1.1);
* **preparing** the Data Protection Impact Assessment in two stages (template can be found in Annex 1.2);
* **determining** and assigning the roles of DPO (Data Protection Officer) at each pilot site;
* **defining** the data protection policy of the project.
This initial version of D1.4, which is submitted in Month 6, identifies a
first set of data sources, data categories, datasets, data types and metadata
that will be involved in the project and describes the data management process
that will be followed in the next steps of the project. This version also
includes how the data owners will contribute to further versions of the
deliverable to complete their dataset descriptions.
The Data Management Plan is a living document that will be updated with new
information as it arises throughout the duration of the project. The
Deliverable will be updated in M48 through deliverable D1.5 and will include
the final services offered to users by the AVENUE platform, final data from
all partners, finalised decisions on embargo policy, as well as finalised
decisions and processes for any pending issues that are not specified early in
the project.
## Intended audience
The intended audience for AVENUE consists of project partners who are involved
in data handling in any manner during the project’s lifecycle. These are the
project Consortium and the partners that are involved in the pilot tests who
collect, store and process information throughout the project, the developers
that develop the platform and are expected to ensure data protection, the
partners that will publish their work as well as all members that participate
in the dissemination process of the project.
## Interrelations
Data Management aspects are closely related to:
1. **Ethics Issues** in AVENUE, especially in the context of collecting, managing and processing
(sensitive) personal data from real-life users;
2. **Security aspects** , e.g., data privacy and protection, data ownership, etc.;
3. **Design and development activities** in terms of defining the data that need to be collected or reused to offer tailored services to each specific user group (i.e., older people, people with disabilities, commuters, business travellers, tourists, etc.); and
4. **Legal issues** related to personal data (including sensitive personal data), security and privacy.
Therefore, this document will be updated as the work evolves and in close
synergy with the following tasks:
* WP1: T.1.4. IPR & Data Management Plan
* WP2: T.2.2. Passenger needs (including PRM) and requirements specification
* WP2: T2.3 Stakeholders identification, expectations and barriers imposed
* WP5: T5.2 Data and fleet coordination and management
* WP6: T6.3 Security and privacy control
* WP7: Autonomous vehicles for public transport demonstrators
* WP8: Socio-economic and environmental evaluation
* WP11 - Ethics requirements
The work packages interrelations are presented in the figure below (Figure 2).
**Figure 2 Interrelations of AVENUE's work packages (WPs)**
# AVENUE data
## Data summary
This section provides an explanation of the different types of data sets to be
produced or collected in AVENUE project, which have been identified at this
stage of the project. As the nature and extent of these data sets can evolve
during the project, more detailed descriptions will be provided in the updated
versions of the DMP. The descriptions of the different data sets, including
their reference, file format, standards, methodologies and metadata and
repository to be used are given below.
The aim of this chapter is to:
* provide a first categorization of the data;
* identify a list of the data types that will be generated;
* provide a list of metadata that will be used to describe generated data and enable data re-use;
* provide recommendations on the data collection and sharing process during the project and beyond.
## Data presentation
### Data identification
The AVENUE project will produce different categories of data sets. Based on
the information collected through the template described below, data is
categorised as follows:
* **Data types** : this category refers to the type of data in terms of its source and relevance (i.e. vehicle related data, user data etc.);
* **Data sets** : refers to the nature of a complete set of data, that may contain information about different topics (i.e. an excel file containing information about user preferences);
* **Dataset category** : refers to the categories of data based on the level of process (i.e. raw, pre-processed, aggregated, consolidated, metadata, etc.).
In order to identify and define the data types and data management procedures,
a template was circulated to the partners to be completed with information and
descriptions about the data and metadata that will be collected with their
service/tools. The description includes some technical information (such as
format, size, privacy level, etc.) but will be further clarified and updated,
if necessary, in following versions.
In detail, the template collected information about the following:
* The name of the data
* Whether the data was collected or created
* Data description
* Data category
* Data type
* Data format
* Data size
* Data Ownership
* Privacy level
* Data repository during the project (for private/public access)
* Data sharing
* Back-up frequency
* Status of data at the end of the project (destroyed or not)
* The duration of the data preservation (in years)
* Data repository after the project is complete
The information gathered was analysed so as to identify the types of data and
determine how to manage it within the Data Management Plan and as instructed
by the GDPR principles.
### Data sources
This section describes the data sources and data flows as determined by
**Figure 3** , which depicts a generic AVENUE diagram and demonstrates the
main data flows in three data flow stages. In AVENUE the following **user
groups** are determined as **data sources** : Autonomous Minibus, Data
providers, Customers, the AVENUE core services and the Operators. The
following figure (Figure 3) depicts the main data sources that will contribute
to the AVENUE core services and the main data flows, in three stages that are
transferred from one to another.
More specifically, during the first flow stage, initial data comes from the
following three user groups:
* **Autonomous Minibus** : This group provides information about the vehicles that participate in the project as well as information related to emergency control needs.
* **Data Providers** : data providers provide information about the public transportation, city conditions, traffic state and any other form of data concerning the urban environment where the pilots are taking place.
* **Users/Customers** : participating users/customers provide information from inside and outside the vehicles about their preferences, needs and wants.
Then, for the second flow stage, the above collected information flows towards
the **AVENUE core services** group, so as to cover the following activities:
* Handle passenger requests;
* Provide solutions according to transport policies and provide feedback to help them improve as necessary;
* Build and provide Vehicle-to-Platform interfaces and data transfer protocols;
* Achieve dynamic route planning and fleet optimisation;
* Improve decision making and dispatching of resources and vehicles as necessary;
* Ensure GDPR compliant data management.
Finally, information ends up at the **operators’** end to inform surveyors,
administrators and policy makers so that they can proceed with the appropriate
decision making and policy making that will bring about the expected results.
The AVENUE project will collect a large amount of raw data to measure and
identify the needs of passengers in order to provide integrated point-to-point
services, as well as to handle information from vehicles, transport providers
and bus manufacturers.
From the raw data, a large amount of derived data will be produced to address
multiple research needs. Derived data will follow a set of transformations:
cleaning, verification, conversion, aggregation, summarization or reduction.
In any case, data must be well documented in order to facilitate and foster
sharing, to enable validity assessments and to enable its efficient use and
re-use. Thus, each data set must be described using additional information
called **metadata**. The latter must provide information about the data
source, the data transformation and the conditions in which the data has been
produced.
The following list further summarises and describes **the data types** and
**data sources** that will be collected:
**Autonomous Minibus** : The Autonomous Minibus information is the data
collected from the vehicles that take part in the pilots.
<table>
<tr>
<td>
Vehicle info
</td>
<td>
The data that describes the condition and the mobility of the vehicle. Such
data can be, for example, longitudinal speed, longitudinal and lateral
acceleration, etc. The vehicle information is shared between the Autonomous
Vehicle Control (AVC) and the closed circuit that is used by the operators for
communication and control.
</td> </tr> </table>
#### Picture 1 Sensors and equipment to collect data on the AVENUE mini buses
**Data providers:** The information from data providers refers to traffic
data, smart city/open data and public transportation information. More
specifically:
<table>
<tr>
<td>
Traffic data
</td>
<td>
Provides information about how travel speeds on specific road segments change
over time.
</td> </tr>
<tr>
<td>
Smart city/open data
</td>
<td>
Information that is freely available to users.
</td> </tr>
<tr>
<td>
Public transportation
</td>
<td>
This data consists of information about public transportation services and
schedules.
</td> </tr> </table>
**Customers** : information from final users, which are out-of-vehicle people,
in-vehicle people and operators.
<table>
<tr>
<td>
Out-of-vehicle
</td>
<td>
Information from/about people waiting for AVENUE services.
</td> </tr>
<tr>
<td>
In-vehicle
</td>
<td>
Information from/about people enjoying the AVENUE services (passengers).
</td> </tr>
<tr>
<td>
Operators
</td>
<td>
Information from/about professionals who work either as distant surveyors or
system administrators.
</td> </tr> </table>
**AVENUE core services** : describes information from all previous user groups
that is necessary for the smooth and uninterruptible operation of AVENUE
services.
<table>
<tr>
<th>
Data management
</th>
<th>
Describes the way information is exchanged and managed.
</th> </tr>
<tr>
<td>
Decision making and dispatching
</td>
<td>
Refers to information necessary for the decision-making process and for the
dispatch of an autonomous vehicle towards a customer.
</td> </tr>
<tr>
<td>
Fleet optimisation
</td>
<td>
Information that allows for optimum use of fleet.
</td> </tr>
<tr>
<td>
Dynamic route
planning
</td>
<td>
Data concerning optimum route planning and taking.
</td> </tr>
<tr>
<td>
Vehicle-to-Platform interfaces and data transfer protocols
</td>
<td>
Information exchanged between the vehicle and the platform.
</td> </tr>
<tr>
<td>
Passenger requests
</td>
<td>
Information regarding the needs and preferences of passengers.
</td> </tr>
<tr>
<td>
Transport policies
</td>
<td>
Information concerning the existing policies in the transport sector.
</td> </tr> </table>
### Data collection
Within the project’s framework, a socio-economic and environmental impact
assessment will be conducted. To this end, several methods will be applied to
collect the necessary information. The data that will be collected refer to
user experience, accessibility for persons with restricted mobility (PRM),
socio-economic and environmental acceptance. The following table (Table 1)
summarises the research methods and tools that will be used for the collection
of this data.
#### Table 1 Objectives, methods and tools used in user related data collection
<table>
<tr>
<th>
Objective
</th>
<th>
Method
</th>
<th>
Tools
</th> </tr>
<tr>
<td>
User experience
</td>
<td>
Qualitative
</td>
<td>
Longitudinal studies; Accompanied observation; Focus groups
</td> </tr>
<tr>
<td>
User experience
</td>
<td>
Quantitative
</td>
<td>
Shadowing/observation plus questionnaire
</td> </tr>
<tr>
<td>
Socio-economic and environmental acceptance
</td>
<td>
Quantitative
</td>
<td>
Shadowing/observation plus questionnaire; Large scale survey: zero
measurement, intermediate measurement, final control measurement; Big Data
analysis (videos recorded in the busses could be analysed)
</td> </tr>
<tr>
<td>
Accessibility for PRM
</td>
<td>
Qualitative
</td>
<td>
Focus groups; Accompanied observation
</td> </tr> </table>
Data collection regarding user needs, traffic, preferences, etc. throughout
the pilots will be made using paper-and-pencil and online questionnaires. The
gross random sample will range from five to eight thousand residents, selected
randomly via post code areas and living within a thirty to fifty-kilometre
radius. Invitations will be sent by email (if personalised e-mail addresses
are available) with the link to the online questionnaire, or by post with a
paper-and-pencil version of the questionnaire and a return envelope. The net
sample will depend on the return flow, although a minimum of 10% is expected.
The paper-and-pencil version of the questionnaire will be scanned via _evasys_
( _https://en.evasys.de/_ ), the online questionnaire will automatically be
registered via _Unipark_ ( _https://www.unipark.com/en/_ ) and the
statistical analyses will be made via SPSS. Respondents will consent to be
contacted via email prior to any project-related communication taking place,
in order to comply with GDPR requirements. The process will be described in
the PIA report submitted in the final version of this Deliverable in Month 48.
However, the methodology applied will be reported at an intermediate stage by
the involved partners, when pilot plans are set and before any testing is
conducted. As this Deliverable is treated as a living document, the processes
will be added to it once they are put in motion and reported.
Vehicle operation related data will be collected through sensors, pavement
tapes, proximity sensors, seat sensors, surveillance cameras, etc., and will
be managed in accordance with the local policies of the operators.
### Data types, datasets and dataset categories
This section presents a short description of the data that will be collected,
generated and managed in AVENUE. More specifically, data is clustered into
three different sections: **data types**, **datasets** and **dataset
categories**. Data types are related to the source of the data (i.e. vehicle
data, traffic data, etc.); datasets refer to the file extension of the data;
and, finally, dataset categories refer to the level of processing that the
data has undergone.
The **types** of data generated, collected and managed within AVENUE fall into
the following categories:
* Subjective data (user profile and user request related data)
* Vehicle related data
* Urban environment related data (traffic and city related data)
* Infrastructure related data
As far as **datasets** are concerned, the following will be handled by AVENUE:
* Reports (in the form of word or pdf documents)
* Excel files with raw data, as received by sensors, surveys etc.
* Video signals
* Database
The following table is a template that will be used to describe the datasets.
**Table 2 – Dataset Description template**
<table>
<tr>
<th>
**Dataset Reference**
</th>
<th>
AVENUE_WPX_TX.X_XX
Each data set will have a reference generated by the combination of the name
of the project, the Work Package (WP) and Task (T) in which it is generated,
and a serial number (for example: AVENUE_WP3_T3.4_01)
</th> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
Name of the data set
</td> </tr>
<tr>
<td>
**Dataset Description**
</td>
<td>
Each data set will have a full data description explaining the data
provenance, origin and usefulness. Reference may be made to existing data that
could be reused.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata attributes list
The used methodologies
</td> </tr>
<tr>
<td>
**File format**
</td>
<td>
All the format that defines data
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
Explanation of the sharing policies related to the data set between the next
options:
**Open** : Open for public disposal
**Embargo** : It will become public when the embargo period applied by the
publisher is over. In case it is categorized as embargo the end date of the
embargo period must be written in DD/MM/YYYY format.
**Restricted** : Only for project internal use.
Each data set must have its distribution license.
Provide information about personal data and mention if the data is anonymized
or not. Tell if the dataset entails personal data and how this issue is taken
into account.
</td> </tr>
<tr>
<td>
**Archiving and**
**Preservation**
</td>
<td>
The preservation guarantee and the data storage during and after the project
(for example: databases, institutional repositories, public repositories …)
</td> </tr> </table>
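For partners maintaining these descriptions programmatically, the sketch below shows one possible way to capture the fields of Table 2 as a structured record; the class and field names are our own illustration, not part of the AVENUE specification:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetDescription:
    """One entry of the dataset description template (Table 2), illustrative."""
    project: str            # project acronym, e.g. "AVENUE"
    wp: int                 # Work Package in which the dataset is generated
    task: str               # Task, e.g. "3.4"
    serial: int             # running number of the dataset within the task
    name: str
    description: str
    standards_and_metadata: list[str] = field(default_factory=list)
    file_formats: list[str] = field(default_factory=list)
    sharing: str = "Restricted"      # "Open", "Embargo" or "Restricted"
    embargo_end: str | None = None   # DD/MM/YYYY, required when sharing == "Embargo"
    archiving: str = ""              # repository used during/after the project

    @property
    def reference(self) -> str:
        # Builds the reference described in Table 2, e.g. AVENUE_WP3_T3.4_01
        return f"{self.project}_WP{self.wp}_T{self.task}_{self.serial:02d}"

# Example entry with purely hypothetical content:
ds = DatasetDescription(project="AVENUE", wp=3, task="3.4", serial=1,
                        name="Example survey results",
                        description="Anonymised questionnaire responses.",
                        file_formats=["csv"], sharing="Restricted")
print(ds.reference)  # -> AVENUE_WP3_T3.4_01
```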
The AVENUE project will produce different dataset categories. More
specifically:
* **Context data** : data that describes the context of a pilot.
* **Acquired and derived data** : data that contains all the collected information related to a pilot.
* **Subjective data** : questionnaires, surveys, personal and group interviews.
* **Raw/unprocessed data** : data collected directly from the source (either objective or subjective).
* **Metadata** : descriptions of data that will facilitate the data analysis and data pre-processing.
* **Aggregated data** : data summary obtained by reduction of acquired data and generally used for data analysis.
* **Consolidated data** : data collected across sites and per data type.
### Subjective data types
This type of data is collected during all types of qualitative surveys,
questionnaires, personal and group interviews (WP2) that take place in the
project. This data is **collected** , **managed** and **processed** by AVENUE
partners. The data may be collected from users/customers, data providers and
operators as well as other types of stakeholders. In all cases, the data
is/will be anonymised to ensure privacy and protection of the participants’
identity. In the case of the users/customers, subjective data will mostly deal
with travel preferences and needs.
### Data privacy policy
Participants’ personal data will be used in strictly confidential terms and
will be published only as statistics (anonymously). In addition to the ethical
aspects analysed, the following safety provisions will be considered during
the project:
Only one person per site (relevant Ethical issues responsible) will have
access to the relation between test participants’ code and identity, in order
to administer the tests. This means that data will be pseudonymised, making
the compliance with GDPR essential. The GDPR compliance form will be filled in
by all project partners throughout the duration of the project. The GDPR
compliance form is found in Annex 1.1. Further to that, all partners will fill
the Data Protection Impact Assessment form (Annex 1.2), again throughout the
duration of the project. The filled in forms will be presented in D1.5 (due
for Month 48). One month after the pilots end, this reference will be deleted,
thus safeguarding full anonymisation of results. The data will be gathered at
each pilot site with consideration for the following aspects and compliance to
GDPR:
* **Confidentiality and data protection** : Participants, and the data retrieved from them (performance or subjective responses) must be kept anonymous unless they give their full consent to do otherwise.
* Identifiable personal information should be encrypted (i.e. pseudonymisation and coding). Otherwise ethical approval is necessary specifically for this;
* Pseudonymisation is preserved by consistently coding participants with unique identification codes. Only one person at each pilot site will have access to personal identifiers (if any).
* Each individual entrusted with personal information is personally responsible for their decisions about disclosing it;
* Pilot site managers must take personal responsibility for ensuring that training procedures, supervision, and data security arrangements are sufficient to prevent unauthorised breaches of confidentiality.
* **Encrypted and pseudonymised data** : To mitigate the risks involved with processing data subject information, any data collected will be encrypted or pseudonymised to the extent reasonably possible, so that individuals cannot be identified, as recommended by Article 32 of the GDPR. Pseudonymised data is data that can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person [2].
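As a purely illustrative sketch of the coding scheme described above (the function and variable names are assumptions, not project code), participants can be pseudonymised by assigning unique random codes while the code-to-identity key is kept in a separate, access-restricted store whose deletion yields full anonymisation:

```python
import secrets

# Held only by the one authorised person per pilot site, stored separately
# from the research data; deleting it yields fully anonymised results.
identity_key: dict[str, str] = {}

def pseudonymise(participant_name: str) -> str:
    """Return a unique participant code; record the mapping separately."""
    code = f"P-{secrets.token_hex(4)}"   # e.g. "P-9f2a1c3b"
    identity_key[code] = participant_name
    return code

# Research records carry only the code, never the name:
code = pseudonymise("Jane Doe")
record = {"participant": code, "travel_preference": "door-to-door"}

# One month after the pilots end, the key is destroyed:
identity_key.clear()   # the records are now anonymous
```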
Any data that will be stored will not contain personal information that can
lead back to the person providing it. Stored data relate only to users’
preferences in daily activities or health problems, not to a person’s beliefs
or political or sexual preferences.
The following data will **_not_ ** be stored:
* Medical info.
* Name, address, telephone, fax, e-mail, photo, etc. of the user (any direct or indirect link to user ID).
* User location (retrieved every time dynamically by the system, but not stored).
* Any other preferences/actions by the users, except the ones mentioned explicitly above.
* Destination of the users (will be used to provide the service but will not be stored).
In addition, aggregated data and conclusions related to impact estimations
will be shared with researchers outside the Consortium upon agreement to do
so, as the project participates in the Open Research Data Pilot. This will be
decided and finalised in the updated version of the Data Management Plan in
D1.5.
For statistical analysis, the decisions made by the participants may be
associated with their type, their travelling preferences, age, gender,
nationality, familiarity and use of services and transport modes, etc. This
will be decided and finalised in the updated version of the Data Management
Plan in D1.5.
The AVENUE core services platform aims to provide users with high quality
door-to-door services. However, it will do so taking into consideration the
following:
* All required user profile data will be stored at his/her mobile device and be securely protected (by password or highly advanced security mechanisms; this depends upon how much the user is willing to invest on it).
* Relevant preferences relate to his/her mobility, favourite locations, frequent routes will be stored in their mobile device and be securely protected (by password or highly advanced security mechanisms; this depends upon how much the user is willing to invest on it).
* The user’s location, route and destination will be only temporarily stored (i.e. during a trip), in order to assist the user and will be automatically deleted afterwards. Any location information will be stored on the user’s mobile device, discharging the consortium from any responsibility of handling sensitive data.
* The user will have the capacity to view, change or delete, as he/she wishes, all stored data by the system (including his/her profile data), with the help of a very simple and multimodal interaction (touch, buttons and voice input supported).
* All AVENUE applications are using unobtrusive sensors, i.e. embedded on a seat or wearable, avoiding the use of cameras and other visual detection sensors that may be misused by an intruder.
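A minimal sketch of the temporary-storage behaviour described in the list above, assuming a hypothetical session object on the user’s device (none of the names below come from the AVENUE platform):

```python
class TripSession:
    """Holds location data only while a trip is active (illustrative only)."""

    def __init__(self) -> None:
        self._route: list[tuple[float, float]] = []  # (lat, lon) fixes

    def record_position(self, lat: float, lon: float) -> None:
        # Used during the trip to assist the passenger.
        self._route.append((lat, lon))

    def end_trip(self) -> None:
        # When the trip completes, the route and destination are deleted
        # rather than persisted, as the policy above requires.
        self._route.clear()

session = TripSession()
session.record_position(46.2044, 6.1432)  # example fix while riding
session.end_trip()                        # nothing is retained afterwards
```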
The above data privacy policy also applies to any communication and
dissemination activities held within the project to promote its results and
communicate new knowledge.
## GDPR compliant system architecture
The following figure depicts the detailed architecture of the AVENUE system.
At the bottom of the detailed AVENUE architecture lies the Multisensor Data
Collection System (MDCS). This is the sensing layer that consists of several
sensors, whose purpose is to detect changes and events in their environment
(either the autonomous mini bus, the interior or the exterior). This layer
consists of three groups of sensors. As depicted in the graph (Figure 4), the
sensors can be road pavement tapes, proximity sensors, privileged passenger
presence sensors, seat sensors, surveillance cameras, route cameras, geographic
positioning systems (GPS), global system for mobile (GSM), Light Detection And
Ranging (LIDAR) or accelerometers.
The information collected by the sensors is sent to other complex electronics
through the **Data Transfer** layer, entailing all processes for the transfer
of information from the collection devices and equipment towards the command-
control-communication level. The **Command-Control-Communication (C3)** is the
main services layer for the AVENUE project. Within this layer, software
components will receive information from the sensors’ layer, process it and
allocate it to the respective sub-components to support activities such as
decision making, route planning, management of passenger requests, incident
management. At the same time, there will be an Automatic Threat Detection
component to further raise passengers’ acceptance, evaluating the behaviour of
passengers. A sensor’s data processing and modeling component will do all the
processing and update the Sensor’s Database. A Decision Support System (DSS)
will assist with the decision-making activities and will help administrators
and surveyors to make decisions about possible issues.
Within the C3 level, the data collected and captured undergoes encoding before
being preprocessed. The data that is encoded includes data from infrastructure
sensors, data regarding passengers’ registration and presence as well as user
location. Data encoding implies that it cannot be retrieved from the database
by unauthorized users, complying with GDPR regulations about data security.
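The encoding described above can be illustrated with symmetric encryption; the sketch below assumes the third-party `cryptography` library and simplified key handling, and is not the project’s documented implementation:

```python
from cryptography.fernet import Fernet

# The key would be held by authorised C3 components only, never stored
# alongside the encrypted records.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encode a sensor reading before it is written to the database:
reading = b'{"sensor": "seat_04", "occupied": true}'
token = cipher.encrypt(reading)          # unreadable without the key

# Only authorised components holding the key can decode it:
assert cipher.decrypt(token) == reading
```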
## Data documentation
Data documentation will mostly be metadata and will be used in order to
recognize each data type and source. The initial documentation details that
will be included for each service are shown below.
## File naming
File naming depends largely on the service and on the datasets derived by
and/or connected with this service. File names have to be **consistent** and
**descriptive**.
The creation of the unified database will be based on a common file naming and
organizing among partners in order to help partners organize effectively and
efficiently their work and, of course, ease collaboration with other partners.
Additionally, partners using this file naming system will find it easier to
work (and share) the correct version of data and accompanying metadata files.
The following file naming offers a consistent naming of the files in order to
make it easier to identify, locate and retrieve the data files.
This file and folder naming system will be used for all data and metadata
files.
1. **Project acronym:** AVENUE
2. **Service related:** Service acronym (initials based on the name given in DoA)
3. **Location (where it resides):** e.g. DFA_DB (Daily Functions Assistance Database)
4. **Researcher name/initials:** Ian Smith (i.e. IS; alternatively, could use database user credentials)
5. **Pilot identifier:** e.g. GP for Geneva Pilot site
6. **Date or range of pilot:** 011216
7. **Type of data:** CE for calendar entries
8. **Conditions:** condition_user group (e.g. baseline_PT)
9. **Version number of file:** Only single numbers are acceptable (1, 2, 3)
10. Three letter file extension for application specific files (e.g. csv)
Special characters and spaces are avoided because they might not work well
with certain programmes (underscores are used instead). Each data folder will
include a regularly updated README.txt in the directory to explain the codes
and abbreviations used and, in general, the coding practices and naming
conventions followed. Based on the example used above, an efficient naming
convention within the AVENUE project looks like this:
**AVENUE_DFA_DB_IS_GP_011216_CE_baseline_PT_v1.csv**
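A short sketch of how such file names could be assembled and checked programmatically; the helper below follows the ten components listed above but is our own illustration, not an AVENUE tool:

```python
import re

def make_filename(project: str, service: str, location: str, researcher: str,
                  pilot: str, date: str, data_type: str, condition: str,
                  version: int, ext: str) -> str:
    """Join the ten naming components with underscores, no spaces."""
    parts = [project, service, location, researcher, pilot,
             date, data_type, condition, f"v{version}"]
    name = "_".join(parts) + f".{ext}"
    # Guard against special characters that break some programmes:
    if not re.fullmatch(r"[A-Za-z0-9_.\-]+", name):
        raise ValueError(f"illegal character in file name: {name}")
    return name

print(make_filename("AVENUE", "DFA", "DB", "IS", "GP",
                    "011216", "CE", "baseline_PT", 1, "csv"))
# -> AVENUE_DFA_DB_IS_GP_011216_CE_baseline_PT_v1.csv
```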
# Data Collection and Storage Methodology
As previously stated, data will be stored in secure server systems. Only the
PC and selected personnel from demonstrators will possess the key to re-
identification, making the data pseudonymised. No data related to personal
information of the involved participants will be collected and stored.
Instead, all participants will be granted an identification number based
on each participant’s role in each city (role ID), allowing mapping of
participants’ actions during the execution and pilot realisation phase. The
relationship between the role ID and the participant will be recorded at the
repository and will be stored separately and securely. This file will be
accessible only to the corresponding site manager. The key to link the
participant’s name to the code which identifies the data file will not be
provided to anyone and the privacy of the data will be protected. Furthermore,
data will be kept only for the period of time necessary to accomplish the
goals of the project and the population of the AVENUE Repository. This period
of time will be defined when the pilot data collection process is established
and after Consortium consensus.
In any case, all data that will be considered confidential from the pilots
will be discarded by the project completion, whereas only the public models
and respective datasets that will be described in detail in the Data
Management Plan will be kept open. Partners will define the data embargo
period (if any) and those who are data owners will decide which datasets (or
parts of) will be openly shared. Such decisions will be made after AVENUE
datasets have been collected.
## Data protection
In order to protect the collected data and control unauthorised access to the
AVENUE data repositories, only authenticated personnel will have access to
pilot-specific data collected. During the proposed system lifecycle, a
holistic security approach will be followed, in order to protect the pillars
of information security (confidentiality, integrity, availability) from a
misuse perspective. The security approach will be identified by a methodical
assessment of security risks followed by their impact analysis. This analysis
will be performed on the personal information and data processed by the
proposed system, their flows and any risk associated to their processing. The
details on measures to prevent misuse of data and research findings as well as
the performance of security audit and data privacy assessment will be reported
in the related Ethics deliverables.
Towards the protection of personal data of volunteer pilot participants, the
following issues will be taken into account:
* All data associated with a recognizable person will be held private.
* Individual data on participants will be used in strictly confidential terms and will only be published as statistics (anonymously).
* Any data or information about a person will be held private, regardless of how this data was acquired. Therefore, data obtained incidentally within AVENUE project will be handled with confidentiality. This accidental obtainment does not substitute the compulsory procedure, in which researchers need each participant’s explicit consent to obtain, store and use information about them.
* Data collection will be anonymous but data are defined as pseudonymized because their personal details will be securely stored. Only one person will have access to these details and no access to pilot data. Contact details are kept in case we wish to contact participants for participation in another pilot phase. However, as pilot plans are not still in place, we state that data are pseudonymised. If pilot data collection will not entail keeping contact details, then data will be fully anonymized.
* The acquired data will under no circumstances be used for commercial purposes.
During the AVENUE project, responsibilities will be clearly assigned for the
overall management and control of research findings and the control of access
rights. The person responsible for data security issues will report directly
to the quality board, the ethics helpdesk and the project coordinator.
## Data storage, backup and repository
Data collected by the AVENUE services and the pilots must be securely stored
and regularly backed up. In some cases, multiple copies should be made,
especially for large datasets that need to be stored on large capacity
external drives. To this end, the data management plan has to ensure the
following checklist is ticked:
How will the raw data be stored and backed up during the research project?
How will the processed data be stored and backed up during the research
project?
What storage medium will be used for the storage and backup strategy?
Is the backup frequency sufficient to ensure data restoration in the event of
data loss?
Is the data backed up at different locations?
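One way to make this checklist actionable is to record the answers per dataset in a small machine-readable policy entry; the sketch below is purely illustrative and all field names and values are assumptions:

```python
# Hypothetical per-dataset storage/backup policy, answering the checklist.
backup_policy = {
    "dataset": "AVENUE_WP3_T3.4_01",
    "raw_storage": "secure server, institutional repository",
    "processed_storage": "same repository, separate volume",
    "media": ["server RAID array", "large-capacity external drive"],
    "backup_frequency_days": 7,                   # must suffice to restore after loss
    "backup_locations": ["on-site", "off-site"],  # backed up at different locations
}

# Trivial checks that the policy answers the checklist:
assert backup_policy["backup_frequency_days"] <= 7
assert len(backup_policy["backup_locations"]) >= 2
```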
A **Data Repository** will be created for the purposes of storage during the
project and potentially for a period of time after the project is complete.
The project’s repository will be hosted at the University of Geneva (UniGe).
It is physically located in the premises of the university, where there will
be regular back-ups and continuous intrusion controls with the use of advanced
security detection systems. Further to that, the repository will be strongly
protected in conformance with current security practices.
Access to the data repository will be given to project participants through an
identification number based on each participant’s role in each city
(role ID), which will allow the mapping of participants’ actions during the
execution and pilot realisation phase. This file will be accessible only to
the corresponding site manager. The key to link the participant’s name to the
code which identifies the data file will not be provided to anyone and the
privacy of the data will be protected. The relationship between the role ID
and the participant will be recorded at the repository and will be stored
separately and securely. Data will be kept only for the period of time
necessary to accomplish the goals of the project and the population of the
AVENUE Repository. In any case, all data that will be considered confidential
from the pilots will be discarded by project completion, whereas only the
public models and respective datasets that will be described in detail in the
Data Management Plan will be kept open.
However, not all partners have yet decided which strategy to follow and how to
proceed with their data storage policy and backup frequency. This will be
finalised in the next version of the Data Management Plan D1.5.
It is common practice to store the data for a period of 2 to 3 years after
the project is complete. The data produced in the project as Public
Deliverables have, in principle, no expiration date and can be kept for at
least ten years in the project’s repository. Data that is collected from
interviews, vehicle operations, pilots, etc. will be handled differently; the
anonymised part of the data will be retained (and possibly made available to
researchers), while vehicle operation data will be handled according to the
policies of the operators, as the data may be requested for legal purposes by
the authorities. This data will not be public and will be erased from the data
repository, remaining stored only in the operators’ systems.
Partners may choose to keep the data for some time, indefinitely, or even
delete it as soon as the project is complete. NAVYA and CEESAR have decided
that information collected by sensors regarding incidents/accidents, nominal
behaviour, reports and the incident/accident database will be stored in a
private cloud for a duration of four (4) years and will be destroyed at the
end of the project. Other partners will decide about the duration of storage
within the following months. This is an issue to be determined as the project
continues and will be finalised in the updated version of the Data Management
Plan, D1.5, due in M48.
## Data retention and destruction policy
Within the AVENUE Data Management Plan, the open research data retention and
destruction strategy will be also reported along with the limits on their
secondary use and their disclosure to third parties. A number of critical
factors that are relevant for data retention will be taken into account,
namely:
1. Purpose of retaining data;
2. Type of open data collected;
3. Access policy for the open data;
4. Data storage, security and protection measures; and
5. Confidentiality and anonymity of data.
According to the project’s data retention policy, the data stored for the
purposes of the project will be deleted 4 years after the completion of the
project. Deletion will be carried out by authorized partners to ensure it is
conducted in a correct and legal manner, and the policy is also subject to
changes and modifications during the project, if considered necessary.
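A minimal sketch of how the retention rule could be enforced; the four-year period comes from the policy above, while the function name and dates are assumptions:

```python
from datetime import date

RETENTION_YEARS = 4  # from the AVENUE data retention policy

def deletion_due(project_end: date, today: date) -> bool:
    """True once stored project data must be deleted."""
    deadline = project_end.replace(year=project_end.year + RETENTION_YEARS)
    return today >= deadline

# Hypothetical dates, for illustration only:
project_end = date(2022, 4, 30)
print(deletion_due(project_end, date(2026, 5, 1)))   # True: past the deadline
print(deletion_due(project_end, date(2025, 1, 1)))   # False: still retained
```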
Regarding **data destruction**, as computerized data (hard disk drives) will
be used for data storage, existing methods for permanent and irreversible
destruction of the data will be utilized (i.e. full disk overwriting and
re-formatting tools). In all cases, the data protection and privacy of
personal information will be governed by the following principles, which form
part of an overall information security policy:
* Protective measures against infiltration will be provided;
* Physical protection of core parts of the systems and access control measures will be provided;
* Logging of AVENUE system and appropriate auditing of the peripheral components will be available.
## Data access, sharing and reuse
At each pilot site, a nominated person will be responsible for overseeing that
data are safe and secure. Overall, data will be stored in secure server
systems and will be anonymised. Only the PC and selected personnel from
demonstrators will possess the key to re-identification. No data, related to
personal information of the involved participants will be collected and
stored.
One person will be allowed to have **access** to full datasets (i.e. higher
authorisation level) and the rest of the data team will have medium or lower
level of authorisation. Data will be stored in secure areas (physical,
network, private cloud-based). Higher level of authorisation is granted only
for sensitive and personal data. According to the system architecture,
sensitive information received by sensors and users is encoded, making access
to the data from unauthorised users impossible. Data to be shared for analysis
or transferred to the AVENUE database will not include any personal or
identification data.
Data collection, storing, accessing, and sharing abide by the international
legislation (Data Protection Directive 95/46/EC “on the protection of
individuals with regard to the processing of personal data and on the free
movement of such data”) and guidelines.
Different levels of authorisation will exist also for remotely accessing data.
High level access to data will not be possible outside the work premises, as
they are defined at each pilot site.
Use of cloud store data will be available for medium and lower level of
access. Not all individuals will have the same access privileges in order to
avoid data corruption, loss and damage. Dataset owners will have full access
(read, write, update, delete), however, individuals who want to use/reuse the
dataset will be able to read and download but not make any changes or
modifications to the specific dataset. Of course, all datasets will be
password-protected. In some cases, encryption will be necessary.
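The tiered access scheme described above can be sketched as a simple authorisation check; the level names and rules mirror the text, but the code itself is illustrative and not part of the AVENUE platform:

```python
from enum import IntEnum

class AccessLevel(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3     # reserved for sensitive and personal data

def can_access(user_level: AccessLevel, dataset_sensitive: bool,
               on_premises: bool, wants_write: bool, is_owner: bool) -> bool:
    """Apply the rules from the data access section (illustrative)."""
    if dataset_sensitive and user_level < AccessLevel.HIGH:
        return False                 # sensitive data needs high authorisation
    if user_level == AccessLevel.HIGH and not on_premises:
        return False                 # high-level access only at work premises
    if wants_write and not is_owner:
        return False                 # non-owners may read/download only
    return True

# A researcher re-using an open dataset: read-only, medium level, remote.
print(can_access(AccessLevel.MEDIUM, dataset_sensitive=False,
                 on_premises=False, wants_write=False, is_owner=False))  # True
```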
The main restrictions with regard to confidentiality are the following:
− Name
− GPS coordinates (only metadata or surrogates)
− Raw video and audio recordings
The above data is identified based on the initial data pools set by partners
responsible for services/tools. Other data restrictions might arise during the
course of the project. If so, they will be finalised in the next version of
the deliverable, D1.5.
Concerning **data sharing**, under Horizon 2020, publications resulting from
work performed within a project have to be in open-access journals [3].
Participating in an open scholar community can help make the work of the
partners, and the project, more visible to researchers who work in similar
disciplines and research areas. Specifically for AVENUE, publishing in
open-access journals is sought. Relevant dissemination activities target,
organize and manage publishing efforts (WP10).
**Data re-use** by external researchers and other stakeholder groups will be
feasible for selected datasets. The embargo period will be at least the
duration of the project, as partners would like to easily manage the data
whilst collection, analysis and reporting are ongoing. Sharing and re-use will
be applied in the central database according to each data depositor’s wishes
and suggestions.
## Data ownership
Any data gathered during the lifetime of the project is owned by the
beneficiary or the beneficiaries (joint ownership) that produce it, according
to **subsection 3, Art. 26 of the signed Grant Agreement (769033)**. The
beneficiaries hold the intellectual property rights to the data they collect,
and re-use of the data is defined by the limitations they might set on how
they make the data available. This means partners decide whether they make
data open-access (no additional restrictions on access to data or
publications) or whether there is an embargo period, whereby permission for
accessing the data is given after a certain period of time. As datasets have
not been formed yet and services are still to be enhanced and connected to the
AVENUE app centre, this information will be available in an updated version of
this deliverable.
## Measures for preventing malevolent/ criminal/terrorist abuse of research
findings
During the AVENUE project, responsibilities will be clearly assigned for the
overall management and control of research findings and the control of access
rights. The person responsible for data security issues will report directly
to the quality board and the project coordinator. The research findings will
be protected from malevolent/criminal/terrorist abuse by strictly following
procedures, as they will be defined by the Ethical Advisory Board.
# FAIR Data
To further promote knowledge transfer and to contribute towards new research
content and results, AVENUE participates in the Open Research Data Pilot
(ORDP). However, any data that is identified and labelled as “restricted” or
under an “embargo” period will be excluded from the ORDP. To accommodate this,
the data that will be included in the ORDP should be handled according to the
FAIR principles, meaning that the data that will be generated during and after
the project will be made **findable** , **accessible** , **interoperable** and
**reusable** .
The FAIR principles are applied in AVENUE because they serve as a template for
lifecycle data management and ensure that the most critical components are
covered. Further to that, H2020 endorses the FAIR principles and encourages
their implementation amongst its projects regardless of scientific and
research discipline.
To make data **findable**, including provisions for metadata, the following
need to be taken into consideration:
* The datasets will have very rich metadata to facilitate findability.
* All the datasets will have a Digital Object Identifier provided by the AVENUE public repository.
* The reference used for each dataset will follow this format: AVENUE_WPX_AX.X_XX, including a clear indication of the related WP, activity and version of the dataset.
* The standards for metadata will be defined in the “Standards and metadata” section of the dataset description table (see Table 3 for the current version of the template).

To make data **accessible**:
* Datasets openly available are marked as “Open” in the “Data Sharing” section of the dataset description table (see Table 3, Annex 2).
* The repository in which each dataset is stored, including Open access datasets, is mentioned in the “Archiving and Preservation” section of the dataset description table (Table 3, Annex 2). The repository that will be used is still under consideration.
* The “Data sharing” section of the dataset description table (Table 3, Annex 2) will also include information with respect to the methods or software used to access the data of each dataset.
* Data and their associated metadata will be deposited either in a public repository or in an institutional repository.
* The “Data sharing” section of the dataset description table (Table 3, Annex 2) will outline the rules to access the data if restrictions exist.

To make data **interoperable**:
* Metadata vocabularies, standards and methodologies will depend on the repository to be hosted (incl. public, institutional, etc.) and will be provided in the “Standards and metadata” section of the dataset description table (Table 3, Annex 2).

To make data **re-usable** (through clarifying licenses):
* All the data producers will license their data to allow the widest reuse possible. More details about license types and rules will be provided in the next version of the deliverable.
* The “Data Sharing” section of the dataset description table (Table 3, Annex 2) is the field where the data sharing policy of each dataset is defined. By default, the data will be made available for reuse.
* The data producers will make their data available to third parties within public repositories only for scientific publication validation purposes.
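Since findability hinges on a consistent reference format, a small validator for AVENUE_WPX_AX.X_XX may be useful; the regular expression below is our interpretation of the format and should be treated as a sketch:

```python
import re

# Interpretation of AVENUE_WPX_AX.X_XX: WP number, activity number, 2-digit serial.
REFERENCE_PATTERN = re.compile(r"^AVENUE_WP(\d+)_A(\d+\.\d+)_(\d{2})$")

def parse_reference(ref: str) -> dict:
    """Split a dataset reference into its WP, activity and serial parts."""
    match = REFERENCE_PATTERN.match(ref)
    if match is None:
        raise ValueError(f"not a valid AVENUE dataset reference: {ref}")
    wp, activity, serial = match.groups()
    return {"wp": int(wp), "activity": activity, "serial": int(serial)}

print(parse_reference("AVENUE_WP3_A3.4_01"))
# -> {'wp': 3, 'activity': '3.4', 'serial': 1}
```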
# AVENUE Ethical Policy
AVENUE project involves data collection from users in the context of
demonstration activities. All national legal and ethical requirements of the
Member States where the research is performed will be fulfilled. Any data
collection involving humans will be strictly held confidential at any time of
the research. This means in detail that:
* all test participants will be informed and given the opportunity to provide their consent to any monitoring and data acquisition process; the subjects will be volunteers and all test volunteers will receive detailed oral information.
* no personal or sensitive data will be centrally stored. In addition, data will be scrambled where possible and abstracted in a way that will not affect the final project outcome.
Furthermore, participants will receive in their own language:
* a commonly understandable written description of the project and its goals; the planned project progress and the related testing and evaluation procedures;
* advice on their unrestricted right to withdraw their agreement.
## AVENUE Ethics Helpdesk
To properly address all ethics issues that have come up and/or will come up
during the project, an Ethics Helpdesk will be set up. The **Ethics Helpdesk**
will oversee the research in order to guarantee that no undue risk for the
user, neither technically nor related to the breach of privacy, is possible.
This allows the Consortium to implement the research project in full respect
of the legal and ethical national requirements and code of practice. Whenever
authorizations have to be obtained from national bodies, those authorizations
will be treated as documents relevant to the project. Copies of all relevant
authorizations will be submitted to the Commission prior to commencement of
the relevant part of the research project.
The procedures and the criteria that will be used to identify/recruit research
participants will be kept on file and submitted upon request. As far as the
informed consent procedures implemented are concerned, they will also be kept
on file and submitted upon request (see paragraph 5.4 below). All used
assessment tools and protocols within AVENUE demonstrators will be verified
beforehand by its Ethics helpdesk regarding their impact to business actors
and end users prior to their application at the sites. The helpdesk takes
responsibility for implementing and managing the ethical and legal issues of
all procedures in the project, ensuring that each of the partners provides the
necessary participation in AVENUE and its code of conduct towards the
participants. Each city will have its own Ethics Committee and one person will
be nominated per site as responsible for following the project’s
recommendations and the National and European legislations.
## EU General Data Protection Regulation (GDPR) compliance
The EU General Data Protection Regulation (GDPR) entered into force on
24/05/2016 and its provisions became applicable in all EU member states on
May 25th, 2018. The AVENUE project will be developed in full awareness of the
GDPR requirements.
According to the Information Commissioner’s Office (ICO), there are seven key
principles set by the GDPR [4]:
* Lawfulness, fairness and transparency;
* Purpose limitation;
* Data minimisation;
* Accuracy;
* Storage limitation;
* Integrity and confidentiality (security);
* Accountability.
The above principles form the core of the methodology, according to which all
necessary steps are taken to become GDPR compliant. AVENUE, as a European
funded project with a Consortium of industry and research members, needs to
demonstrate compliance by maintaining a record of all data processing
activities. The personal data that will be collected during the research
should be kept secure through appropriate technical and organisational
measures. The types of privacy data that the GDPR protects are:
* Basic identity information such as name, address and ID numbers;
* Web data such as location, IP address, cookie data and RFID tags;
* Health and genetic data;
* Biometric data;
* Racial or ethnic data;
* Political opinions;
* Sexual orientation.
The above are considered **personal data** .
According to the European Commission (EC) [5], when collecting participants’
data, people must be clearly informed about the following minimum:
* What project AVENUE is;
* Who the Consortium consists of;
* Why the project will be using their data;
* The categories of data concerned;
* The legal justification for processing their data;
* How long the data will be stored;
* Who else might have access to the data;
* Whether the data will be transferred outside the EU;
* Participants have a right to have a copy of the data, the right to file a complaint with the Data Protection Authority (DPA) and the right to withdraw their consent.
For all the above, information will be provided in a written format, in a
transparent, concise, accessible and clear manner, free of charge.
## GDPR roles
In order to face the data management challenges in an efficient manner, all
AVENUE partners have to respect the policies set out in this DMP. Within the
framework of GDPR, the roles that are identified for AVENUE are those of the:
* Data Protection Officer (DPO),
* Data controller,
* Data processor and
* Data producer
Each partner will allocate the persons responsible for each role. The role
allocation will be finalised in the next version of this deliverable, D1.5 due
for M48. Further to that, the Data controller and Data Processor will fill in
the GDPR Data Processing record template for the DPO to validate it. The Data
Controller’s record template and the Data Processor’s record template are
found in Annex 3.
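For illustration only, a minimal sketch of what one entry in such a processing record might capture, loosely following the GDPR Article 30 fields (the field names below are assumptions; the authoritative templates are those in Annex 3):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProcessingRecord:
    """One data-processing activity, loosely modelled on GDPR Article 30.
    Field names are illustrative; the binding templates are in Annex 3."""
    controller: str                       # partner acting as Data controller
    processor: str                        # partner acting as Data processor
    purpose: str                          # why the data are processed
    data_subject_categories: List[str]    # e.g. pilot participants
    personal_data_categories: List[str]   # e.g. contact details
    recipients: List[str] = field(default_factory=list)
    transfers_outside_eu: bool = False
    retention_period: str = "project duration"
    security_measures: List[str] = field(default_factory=list)
```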
## Ethical and legal issues related to data sharing
The solutions proposed by AVENUE do not **expose** , **use** or **analyse**
personal sensitive data for any **purpose** . In this respect, no ethical
issues related to **personal sensitive data** are raised by the technologies
to be employed in the large scale demonstrators foreseen in Switzerland,
France, Denmark and Luxembourg. However, the AVENUE Consortium is fully aware of the
privacy-related implications of the proposed solutions and respects the
ethical rules and standards of H2020, and those reflected in the Charter of
Fundamental Rights of the European Union. Generally speaking, ethical, social
and data protection considerations are crucial and will be given all due
attention.
All relevant principles and the main procedures regarding privacy, data
protection, security, legal issues and ethical challenges are defined in the
Project’s Ethics Manual and will be updated in their upcoming versions.
Further to that, the general principles for handling knowledge and IPR within
AVENUE will be settled in a Consortium agreement (CA), signed by the AVENUE
Consortium at the project start. These principles are in line with H2020 IPR
recommendations.
The described procedures have been drafted and will be updated in consultation
with the project’s Ethics Management Panel (composed of one external member,
the Coordinator, the Technical & Innovation Manager and the Quality Manager)
that will act as supervisors of the ethical activities of the project and the
local ethics committees at each pilot site, in order to take into account both
European and national ethical and legal requirements.
## Informed Consent
AVENUE scenarios will target participants with competence to understand the
informed consent information. Pilot sites, i.e., AVENUE partners participating
in the pilots, will receive only anonymised and coded or pseudonymised
information. Any recorded data will be available to pilot sites only in
anonymised format.
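As a purely illustrative sketch of the kind of pseudonymisation meant here (the actual procedure will be defined in WP11 and is not prescribed by this plan), a direct identifier can be replaced with a keyed hash whose secret is held only by the party performing the coding:

```python
import hashlib
import hmac
import secrets

# Secret key held only by the coding party; pilot sites receiving the
# pseudonyms cannot reverse them without it. Illustrative only.
SECRET_KEY = secrets.token_bytes(32)

def pseudonymise(identifier: str) -> str:
    """Return a stable pseudonym for a direct identifier."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

print(pseudonymise("participant-042"))  # same input -> same pseudonym
```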
The consent procedures will be carefully determined and managed in WP11 and
used in WP7 that will manage the demonstration activities which will be
performed in the selected cities. The informed consent form, which each
participant will be asked to complete prior to their participation in the
pilots, aims at ensuring that the user accepts participation and is informed
about all relevant aspects of the research project; it will be collected in
written form after the users have been provided with clear and understandable
information about their role (including rights and duties), the objectives of
the research, the methodology used, the duration of the research, the
possibility to withdraw at any time, confidentiality and safety issues, risks
and benefits. The templates of the informed consent/assent forms and
information sheets will be included in the deliverables D2.10-D2.12. The basic
elements of the AVENUE informed consent include:
1. The objectives of the study, its duration and procedure
2. The purpose of the pilots
3. Description of the type of information to be collected
4. Privacy and data protection procedures
5. Appointed data collectors and processors
6. Data ownership, storage location and storage duration
7. The possibility to decline the offer and to withdraw at any point of the process (and without consequences)
8. Contact person
Further details on the informed consent can be found in the Description of
Action, Part B.
# Conclusions
This deliverable provides an overview of the data that the AVENUE project will
collect, manage, handle and produce, along with the related data processes and
requirements that need to be taken into consideration. The document outlines
the data types, sources, categories and collection methods, and describes the
processes that will be followed within the core of the AVENUE project.
The Data Management Plan is a living document and will be enriched along the
project’s lifetime as new information and decisions arise. The next version
will include all final decisions regarding data storage, data ownership, data
access, data sharing and repository issues.
To prepare for the next version that is due in Month 48 of the project, the
partner handling the data management plan will circulate the necessary
guidelines to the Consortium as the project progresses, so as to receive
updated, valid and precise information before finalising the next deliverable.
# Executive Summary
RESIST (Resilient Transport Infrastructure to Extreme Events) is a H2020
framework project funded under grant agreement 769066 that aims to increase
the resilience of seamless transport operation to natural and man-made extreme
events, protect the users of the European transport infrastructure, and
provide optimal information to the operators and users of the transport
infrastructure.
The project addresses extreme events on critical structures, implemented in
the case of bridges and tunnels attacked by all types of extreme physical,
natural and man-made incidents, and cyber-attacks. The RESIST technology will
be deployed and validated under real conditions and on real infrastructures in
two pilots: the first at the Egnatia Odos T9 bridge and the second at the
Milleures viaduct and St. Petronella tunnel in Italy.
The present deliverable represents the first updated version of the Data
Management Plan for the RESIST project, describing the procedures for data
collection, storage and processing within the framework of the project. The
original plan was produced in M4, and this version contains all the changes
and adaptations required, as proposed in the DoA.
In addition to the updates to the text, and in order to make the tracking of
changes easier, the author has added a table of changes, located before the
executive summary. This table includes the page number and a description of
each change/update.
The data collected, generated or reused within the context of the project,
will be processed only for scientific reasons.
All the data related activities within the RESIST project will comply with the
requirements of the General Data Protection Regulation. In parallel, RESIST is
part of the Pilot on Open Research Data so it will make sure that it follows
the FAIR principles thus making its data Findable, Accessible, Interoperable
and Re-usable.
This deliverable provides a brief description of the data managed by RESIST.
It is worth noting that, in principle, only pseudonymised or anonymised
(depending on the case) data that are necessary for the scientific analyses
are expected to be stored in the long term in RESIST databases, hosted on
secured servers that will be regularly backed up and accessible only by
authorised users.
The consent of the users to the processing of their data will be asked prior
to their involvement in any of RESIST activities, upon being well informed on
the project’s nature. Additionally, users will have the option to easily
withdraw their consent and to exercise their rights deriving from the General
Data Protection Regulation, such as the right to be forgotten.
The RESIST outputs in terms of deliverables and/or any project outcomes
related scientific publications will be named and indexed with appropriate
keywords and will be available via the project website and in research fora.
_Keywords: Data Management, Protection of Personal Data, Informed consent,
Open Research Data Pilot, FAIR data._
# Introduction
## Participation in the Open Research Data Pilot
RESIST participates in the Pilot on Open Research Data launched by the EC
along with the Horizon2020 programme. The consortium supports open science,
and the large potential benefits to the European innovation and economy
stemming from allowing reusing data at a larger scale. Therefore, the majority
of data produced by the project will be published with open access. This
objective will obviously need to be balanced with the other principles
described below.
## Objectives of the Deliverable
The overall objective of this deliverable is to update the RESIST Data
Management Plan with any updates that have occurred from the evolution of
RESIST technologies, while keeping the spirit of the original DMP intact. All
the principles and goals stated in the D1.4 deliverable remain unchanged.
Those principles aim to improve and maximize access to and re-use of research
data generated by Horizon 2020 projects, and take into account the need to
balance openness with the protection of scientific information,
commercialization and Intellectual Property Rights (IPR), privacy concerns,
security as well as data management and preservation questions. Dedicated
deliverables on Ethics such as “D12.1 H-requirements No.1” and “D12.2 POPD -
Requirement No. 2” are devoted to explaining how the RESIST project activities
comply with the Protection of Personal Data (POPD) requirements established
by the EC and national regulations, as well as with the GDPR with respect to
the privacy of EU citizens. The data management plan describes the principles on
data security to show how data are being collected, stored, documented, shared
and reused during and after the project’s lifecycle.
Some points raised and decisions reflected in this document that used to be
tentative, since they required further specification of the tools and
platforms along with the research in the Work Packages (WP), are now clearer.
However, the document will be further updated according to project
developments, since it is considered a living document that needs to be
updated over the course of the project when needed, or on demand, when the EU
Commission or local authorities change their policies or when a new innovation
potential has been identified. The RESIST Executive Board has the
responsibility to decide on changes.
## Structure of the deliverable
The deliverable structure is as follows:
The first chapter offers a general introduction describing the participation
on the Open Research Data Pilot, the objectives of the data management plan,
the structure of the deliverable and the intended readership.
Chapter 2 offers a _current_ brief descriptive summary of the project data,
linked to the scope and relative objectives.
Chapter 3 sketches out the guidelines on FAIR data management and deals with
how the RESIST consortium will make sure that the data generated or used
follow the principles of FAIR data.
Chapter 4 discusses the allocation of resources related to the Open Research
Data Pilot.
Chapter 5 deals with data security matters, covering the security provisions
with respect to any data that will be used in RESIST, and Chapter 6 addresses
ethical issues concerning data management with reference to the dedicated
ethics deliverables.
Chapter 7 is devoted to data protection in general describing the general
principles, matters regarding the data subject, cybersecurity and privacy
related information, and measures to be taken by RESIST.
Chapter 8 deals strictly with the protection of personal data in RESIST.
Chapter 9 concludes deliverable scope and main outcomes.
Finally, the annexes section presents the list of public deliverables.
## Intended readership
This deliverable is intended for a public use aiming primarily at the
participants of RESIST, to inform them on how data are being collected,
stored, documented, shared and reused in the project’s lifecycle. In addition,
from a general dissemination strategy point of view, it may also be useful for
other H2020 projects.
# RESIST Data Summary
This section aims at providing an updated overview of the data collected and
produced in the context of RESIST, including details about:
* The purpose of the data collection/generation and its relation to the objectives of the project
* The types and formats of data that the project will generate/collect
* The re-usability of any existing data
* The origin of the data
* The expected size of the data
* The data utility
In RESIST, data collection and generation are key aspects in the sense that
the major project’s objective is to develop an integrated, interoperable and
scalable safety/security platform to offer high levels of resilience and
secure the nearly seamless transport operation in the event of critical
structures suffering all types of extreme physical, natural and man-made
incidents and cyber-attacks.
It networks in a unified manner the targeted groups: transport control room
operators, first responders and citizens/users to enable gathering, processing
and disseminating information for alternative planning in order to speed up
communication in real time, a factor that contributes to the seamless
transport operation.
This materializes in a multi-level approach which includes tools and
technologies for designing both preventive and predictive strategies for
transport network resilience in terms of vulnerability and predictive analysis
and risk assessment, as well as reactive ones in terms of emergency secure
communication and on demand rapid and accurate robotic in-depth structural
damage inspection of critical transport structures after disaster for offering
situation awareness to the control and (re)routing options to the users.
What follows is an analysis of the data types utilized by RESIST and their
origin. Their use is also presented briefly in the sections that follow, at
the data-type level, organised by the tool type/component that will be
developed.
## Time-Dependent Degradation Models
Time-dependent degradation models will be developed for metallic
bridges/tunnels exposed to ageing and climate changes (temperature, humidity),
focusing first on the degradation of coating and then on the ensuing evolution
of corrosion as coating becomes ineffective. Corrosion progress will influence
the capacity of concrete members through reductions in rebar cross-sections
and of metallic members through reduced thickness. Thus, the final output will
be reduced rebar cross-section in concrete members and reduced thickness in
metallic members.
_**Table 1: Time-Dependent Degradation Models data description** _
<table>
<tr>
<th>
**Data Category**
</th>
<th>
**Description**
</th>
<th>
**Type**
</th> </tr>
<tr>
<td>
t
</td>
<td>
time of corrosion initiation after commissioning
</td>
<td>
double
</td> </tr>
<tr>
<td>
tlim
</td>
<td>
time limit for calculation of chloride coefficient of diffusion
</td>
<td>
double
</td> </tr>
<tr>
<td>
C 0
</td>
<td>
chloride concentration on the external face of the concrete
</td>
<td>
double
</td> </tr>
<tr>
<td>
w/c
</td>
<td>
Water to cement ratio
</td>
<td>
double
</td> </tr>
<tr>
<td>
HR
</td>
<td>
Relative humidity
</td>
<td>
double
</td> </tr>
<tr>
<td>
yco2
</td>
<td>
CO2 concentration in atmosphere
</td>
<td>
double
</td> </tr>
<tr>
<td>
T
</td>
<td>
Mean temperature after
commissioning
</td>
<td>
double
</td> </tr>
<tr>
<td>
A s (t)
</td>
<td>
Remaining resistant steel area
</td>
<td>
double
</td> </tr>
<tr>
<td>
ε'su
</td>
<td>
Ultimate strength of steel
</td>
<td>
double
</td> </tr>
<tr>
<td>
t cr
</td>
<td>
time from corrosion initiation and first crack opening
</td>
<td>
double
</td> </tr>
<tr>
<td>
wcrack
</td>
<td>
crack width
</td>
<td>
double
</td> </tr>
<tr>
<td>
texp
</td>
<td>
Time of rebar exposure
</td>
<td>
double
</td> </tr>
<tr>
<td>
SP w
</td>
<td>
Width of spalling
</td>
<td>
double
</td> </tr>
<tr>
<td>
SP d
</td>
<td>
Depth of spalling
</td>
<td>
double
</td> </tr>
<tr>
<td>
tleach
</td>
<td>
Time of leaching exposure
</td>
<td>
double
</td> </tr>
<tr>
<td>
A l
</td>
<td>
Concrete area where leaching is detected
</td>
<td>
double
</td> </tr>
<tr>
<td>
Stiff_red
</td>
<td>
Stiffness reduction after leaching
</td>
<td>
double
</td> </tr>
<tr>
<td>
f_red
</td>
<td>
Concrete resistance reduction
</td>
<td>
double
</td> </tr>
<tr>
<td>
various
</td>
<td>
Cross section dimensions; original rebar diameters
</td>
<td>
double
</td> </tr>
<tr>
<td>
d 0
</td>
<td>
Original concrete cover thickness
</td>
<td>
double
</td> </tr>
<tr>
<td>
c 0
</td>
<td>
Carbonation exposure class [C1-C4]. This input determines the corrosion
current density icorr_20 (in mA/cm2) at T = 20 °C.
</td>
<td>
index [1-4]
</td> </tr>
<tr>
<td>
Cn
</td>
<td>
Chloride concentration
</td>
<td>
double
</td> </tr> </table>
**The purpose of the data collection/generation and its relation to the
objectives of the project:**
The collected data are geographic (region, temperature, humidity) and
structural (structure of the bridge and data about its structural elements,
geometrical data of cracks and defects) and shall be input in the module in
order to create a degradation model, which will calculate the deterioration
effect on the structural elements.
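For illustration, chloride-driven corrosion-initiation models of this kind commonly build on the error-function solution of Fick's second law; a sketch (not necessarily the exact formulation the RESIST module will adopt), using the inputs of Table 1, is:

$$C(x,t) = C_0\left(1 - \operatorname{erf}\frac{x}{2\sqrt{D\,t}}\right)$$

where $C_0$ is the chloride concentration on the external face, $D$ the chloride diffusion coefficient and $x$ the depth from the surface. Equating $C(d_0, t)$ at the cover depth $d_0$ to a critical threshold $C_{cr}$ yields the time to corrosion initiation:

$$t = \frac{d_0^{2}}{4\,D}\left[\operatorname{erf}^{-1}\left(1 - \frac{C_{cr}}{C_0}\right)\right]^{-2}$$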
**The types and formats of data that the project will generate/collect:**
The result of the project is the model itself, a mathematical tool that will
be used in the structural assessment of the bridge and, therefore, in the
integrated RESIST platform.
**The re-usability of any existing data:**
Data from previous projects will be used. Specifically, the model will be
based on other models already created, due to similarities to other projects,
such as AEROBI, 2015, which focuses on bridges, and ROBO-SPECT, 2013, which
aims at tunnels.
**The origin of the data:**
The data will be generated and collected from the end users and the
literature. The model will be based on data extracted by the bridge and
tunnel operators, information about the area (in order to specify the regional
climate conditions), and previous projects with similar data.
**The expected size of the data:**
The data that will be collected, are expected to have a total size of tens of
KB.
**The data utility:**
The data will be used in the deterministic structural vulnerability assessment
of bridges and tunnels exposed to physical attacks, a crucial part in order to
estimate the defects of the bridge and evaluate the dangers of each structure.
## Structural Vulnerability Assessment and Strengthening Intervention
Planning Tool
This Tool will help bridge/tunnel operators plan structural strengthening
interventions in order to increase resilience to specific attacks while the
results will be integrated with the BIM system for road management. A database
will be holding the construction data, relevant information from bridge/tunnel
inspections, data about the environment and operation and data on the
strengthening options. The User Interface will provide visualization of the
results, as well as current and historical data from the database. Data will
be visualized in a number of ways. Bridge components, as well as cross-
sections in these components, will change color to indicate high
vulnerability.
In the case of an incident, data on the damage and safety of structural
components and the whole structure will be sent to the crew on the scene in a
textural and graphical manner.
This Tool will enable the connection of external data
providers/databases/tools with suitable interfaces or by suitable data
exchange formats. There will be a strong commitment to interoperability.
Through this Tool the user will be able to estimate:
* The internal forces and external loads applied on every point of the bridge/tunnel members, and the cross-section, component and system probability of failure at the time of the inspection and at future times under extreme events;
* The strengthening options to increase bridge/tunnel resilience to the specific physical attacks under study;
* The evolution with time of bridge damage and probability of failure in the case of monitored structures under impact;
* The damage, shoring and demolition needs for each component and the overall bridge/tunnel structure, based on RPAS emergency inspection measurements of the damaged structure;
* The functionality of the damaged bridge/tunnel;
* The required repair time and cost after a specific physical attack.
_**Table 2: Structural Vulnerability Assessment and Strengthening Intervention
Planning** _
_**Tool data description** _
<table>
<tr>
<th>
**Data Category**
</th>
<th>
**Description**
</th>
<th>
**Type**
</th> </tr>
<tr>
<td>
Input
</td>
<td>
Bridge/ Tunnel Geometry
</td>
<td>
String/ Double
</td> </tr>
<tr>
<td>
Input
</td>
<td>
Material properties
</td>
<td>
Double
</td> </tr>
<tr>
<td>
Input Crack
</td>
<td>
* Position (x,y,z)
* Type of Crack (Structural , Corrosion )
* Length (m)
* Max Depth (m)
* Max Width (m)
* Angle (Degrees )
* Multi Cracks (0,1,2,3)
* Distance Between multi
* Selfie of the crack
</td>
<td>
Double, image format (png/jpg)
</td> </tr>
<tr>
<td>
Beam deflection
</td>
<td>
Distance of 3 Fixed position (m)
</td>
<td>
Double
</td> </tr>
<tr>
<td>
Pier Inclination
</td>
<td>
dx , dy
</td>
<td>
Double
</td> </tr>
<tr>
<td>
Defects
</td>
<td>
* Type of Defect (Spalling, Leaching, exposed rebar's)
* Position (x,y,z)
* Photo
</td>
<td>
String
Double
Image format (png/jpg)
</td> </tr> </table>
**The purpose of the data collection/generation and its relation to the
objectives of the project:**
The structures’ geometry is crucial in order to create the 3D model, which
will contain any type of defect (cracks, deflection, spalling, inclination…),
as given by the inspections. Through this tool, the user should be able to
estimate the functionality of the bridge/tunnel and the user shall be able to
estimate the evolution of damage, the strengthening options as well as the
required repair time and cost after a specific physical attack.
**The types and formats of data that the project will generate/collect:**
This project will produce the structural vulnerability assessment tool, which
can export data either in text format or in more graphical ways (images, 3D
drawings of damage in the bridge).
**The re-usability of any existing data:**
There are no existing data, which will be used in this tool.
**The origin of the data:**
The designs of the structures will be provided from the operators of the
structures, and the rest of the data from other modules in the project.
**The expected size of the data:**
The collection of data is expected to have a total size of tens of Gigabytes.
**The data utility:**
The data produced in this tool are crucial to the integrated RESIST platform
and, therefore, to the end-user of the project’s platform.
## Multirotor Flying Robot
The particular needs of sensor placement in RESIST by means of aerial robots
equipped with manipulators and the indoor navigation in tunnels will require a
new aerial robot design for the contact and non-contact inspection including
the accuracy, range of operation, duration of flight, required payload, safety
and regulations. Mechatronic concepts will be applied to design the platform,
arm and integrated control system, while the aerodynamic effects near surfaces
of bridges, and particularly tunnels, will be considered. Effort will be
devoted to increase the TRL of the RPAS and provide a solution that not only
solves technological challenges, but is also operationally efficient
(transportation, time to deploy, etc.). The robot will be integrated with all
the necessary sensors for navigation around bridges and in tunnels, and with a
high quality video camera to obtain information of the affected area and with
a laser ground station for high precision measurements of the structure.
_**Table 3: Multirotor Flying Robot data description** _
<table>
<tr>
<th>
**Data Category**
</th>
<th>
**Description**
</th>
<th>
**Type**
</th> </tr>
<tr>
<td>
Logged telemetry
</td>
<td>
This dataset consists of a log data file in text file format or .csv. This log
data will contain all the telemetry information of the drone autopilot
(position, velocity, timestamp, state of drone, etc.). It will be used to
debug and improve the drone control and navigation system. The data is stored
on-board in the flying robotic system.
It will also contain sensor values.
</td>
<td>
Text files in .txt or .csv format.
Sensor values will be float numbers.
A readme file describing the structure and meaning of each of the columns of
the log files will accompany each file. Files will be structured by date and
time of the experiment.
</td> </tr>
<tr>
<td>
Images
</td>
<td>
This dataset consists of image and video data of landscapes of the structure
being inspected. This data will be captured using the integrated sensors in
the drones during the experiments within the RESIST project.
</td>
<td>
This dataset consists of raw sensor data. Metadata: the dataset will be
accompanied by text files describing if and where a drone is located in the
respective image/video data. This metadata will be captured to be used in
image recognition algorithms.
The format of these text files has not been established yet.
</td> </tr> </table>
**The purpose of the data collection/generation and its relation to the
objectives of the project:**
Logged telemetry data collection in relevance with the multirotor flying robot
will have to do with the validation and debugging of the RPAS functionalities
developed in WP4. Other data collected will be used in a way to validate the
inspection-related functionalities that will be offered, like automated
processing of images.
**The types and formats of data that the project will generate/collect:**
Generation and collection of data regarding the RPAS are divided into two main
categories.
* **Telemetry**: Under this category, data will focus on measurements such as position, attitude, status, etc., stored as text files in .txt or .csv format. A readme file describing the structure and meaning of each of the columns of the log files will accompany each file. Files will be structured by date and time of the experiment (a minimal parsing sketch follows this list).
* **Images**: The collection of images directly from the on-board cameras will be offered under this task. This dataset consists of raw sensor data. Metadata: the dataset will be accompanied by text files describing if and where a drone is located in the respective image/video data; this metadata will be captured to be used in image recognition algorithms. The format of these text files has not been established yet.
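A minimal parsing sketch for such a log, assuming a header-less .csv whose column layout is the hypothetical one below (the real layout will be defined by the accompanying readme file):

```python
import csv

# Hypothetical column layout; the authoritative structure is given by the
# readme file that accompanies each log.
FIELDS = ["timestamp", "x", "y", "z", "vx", "vy", "vz", "state"]

def load_telemetry(path):
    """Read one telemetry log and return a list of per-sample dictionaries."""
    rows = []
    with open(path, newline="") as f:
        for rec in csv.DictReader(f, fieldnames=FIELDS):
            row = {k: float(rec[k]) for k in FIELDS[:-1]}
            row["state"] = rec["state"]  # drone state kept as a string
            rows.append(row)
    return rows
```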
**The re-usability of any existing data:**
Existing data from previous flights will be analyzed for calibration and
tuning of RPAS controllers and estimators. No further re-use of existing data
is anticipated.
Any archived images collected (after the project’s beginning) directly from
the on-board cameras are expected to be reused during the lifecycle of the
project.
**The origin of the data:**
The generation and collection of data will be from the RPAS’s on-board sensors
such as inertial units, localization systems, camera, etc.
**The expected size of the data:**
* **Telemetry:** The measurements offered by the on-board sensors are expected to have a total size of tens or a few hundred MB.
* **Images:** The collection of images is expected to have a total size of tens of Gigabytes.
**The data utility:**
The data collection will help in the validation of the RPAS functionalities
with a GPS-free localization and the onsite inspection-related
functionalities.
## Photogrammetric Computer Vision System
The photogrammetric computer vision system will compute 3D measurements from
images of bridges/tunnels taken by the aerial robot. It will consist of a
software implementation and a camera system prototype for integration with the
UAV. 3D data will be computed by a structure-from-motion and dense stereo
algorithm. The system will be targeted to fulfil the specified requirements in
precision and completeness of the measurements and will be developed in such a
way, that it will integrate additional sensor measurements from the robotic
platform for robustness and efficiency.
Research will be performed to advance the state-of-the-art in terms of
accuracy, reliability and versatility and suitability to the target
application.
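As a rough illustration of the dense-stereo step (a sketch using OpenCV's basic block matcher; the project's structure-from-motion and dense stereo pipeline will be considerably more elaborate, and the file names here are placeholders):

```python
import cv2

# Rectified stereo pair in grayscale; file names are placeholders.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Classic block matching; numDisparities must be a multiple of 16.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right)

# Given focal length f (pixels) and baseline B (metres), metric depth per
# pixel follows as depth = f * B / disparity.
```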
_**Table 4: Photogrammetric Computer Vision System data description** _
<table>
<tr>
<th>
**Data Category**
</th>
<th>
**Description**
</th>
<th>
**Type**
</th> </tr>
<tr>
<td>
Stereo images
</td>
<td>
These are recorded images from the sensors
</td>
<td>
png/jpg
</td> </tr> </table>
**The purpose of the data collection/generation and its relation to the
objectives of the project:**
Data collection will be used to compute 3D models and measurements of
bridges/tunnels.
**The types and formats of data that the project will generate/collect:**
The photogrammetric computer vision system will collect image data.
**The re-usability of any existing data:**
The software will be tested and evaluated on existing public datasets.
Any archived images collected (after the project’s beginning) directly from
the photogrammetric computer vision system are expected to be reused during
the lifecycle of the project.
**The origin of the data:**
The generation and collection of data will be from the camera sensors.
**The expected size of the data:**
The collection of images is expected to have a total size of tens of
Gigabytes.
**The data utility:**
The data collection will help with the inspection of the observed structures,
especially in regard to detecting damages.
## Ultrasonic Sensors
An ultrasonic sensor system will be developed to measure the Modulus of
Elasticity of concrete and steel. The ultrasonic sensors will measure the
depth and width of cracks and the rebar diameter. To achieve sufficient
accuracy in the surface velocity measurements needed for evaluating the
Modulus of Elasticity, laboratory tests will be dedicated to the dependence of
the measurement accuracy on the stability of the contact between the
ultrasonic transducers and the material, and on the applied force on the
transducers/material interface, possibly investigating different interface
materials in order to enhance ultrasonic transmission towards the measured
structure. For crack width/ rebar diameter and void detection, optical-
acoustical devices will be used.
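For context, the measured pulse velocity relates to the dynamic Modulus of Elasticity through the standard elastic-wave relation (a sketch of the underlying physics, not necessarily the exact formulation the system will implement):

$$E_d = \rho\,v_P^{2}\,\frac{(1+\nu)(1-2\nu)}{1-\nu}$$

where $\rho$ is the material density, $v_P$ the measured compressional (P-wave) velocity and $\nu$ the Poisson's ratio of the material.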
_**Table 5: Ultrasonic Sensors data description** _
<table>
<tr>
<th>
**Data Category**
</th>
<th>
**Description**
</th>
<th>
**Type**
</th> </tr>
<tr>
<td>
Crack depth
</td>
<td>
Measured crack depth in mm unit
</td>
<td>
Double
</td> </tr>
<tr>
<td>
Crack width
</td>
<td>
Measured crack width in mm unit
</td>
<td>
Double
</td> </tr>
<tr>
<td>
Elasticity Modulus
</td>
<td>
Measured Elasticity modulus in GPa unit
</td>
<td>
Double
</td> </tr>
<tr>
<td>
Rebar/Void position
</td>
<td>
Position of void or rebar with respect to concrete surface in mm unit
</td>
<td>
Double
</td> </tr>
<tr>
<td>
Rebar diameter
</td>
<td>
Measured diameter of rebar in mm unit
</td>
<td>
Double
</td> </tr> </table>
**The purpose of the data collection/generation and its relation to the
objectives of the project:**
All the reported measured parameters are fundamental in the structural health
monitoring of infrastructure and are required for in-depth structural
assessments of steel and concrete bridges/tunnels. They are strictly related
to the RESIST objective 2 (Remotely Piloted Aircraft System (RPAS) for
Inspection and Sensors’ Mounting to Critical Transport Infrastructures).
**The types and formats of data that the project will generate/collect:**
All the data are double in 32 bit format.
**The re-usability of any existing data:**
In the structural health monitoring of infrastructure it is important to
evaluate the evolution of the cracks under inspection (for example, analysing
variations of cracks in terms of depth and width) by comparing the newly
acquired data with previous measurements.
**The origin of the data:**
The parameters reported in Table 5 are measured by the ultrasonic measurement
systems developed by CNR. In particular, the crack depth, the elasticity
modulus and the rebar/void parameters are derived using a custom
pulser/receiver ultrasonic unit controlled via USB from the drone platform.
The crack width is instead measured using an innovative opto-acoustic
ultrasonic module, also controlled via USB.
**The expected size of the data:**
The data size for the ultrasonic sensor is quite limited and depends on the
number of cracks and rebars/voids inspected. For every set of measurements the
overall data will be less than 1 KB. The data size will increase considerably
if, in addition to the essential measurement results, the detailed data
related to the measurement itself (such as the single waveforms detected by
the sensors, from which the results are derived) are also stored, while
remaining well below the MB range.
**The data utility:**
The measured parameters will be exploited for the assessment and management of
critical structures under extreme events.
## Cognitive Computer Vision System
The legacy system for the identification and assessment of defects in concrete
bridges developed in AEROBI, will be extended to cover concrete tunnels and
steel bridges and tunnels. The focus will be to broaden the scope of
applicability of visual inspection workflows. Ways to improve the runtime of
the system are also currently being explored. Application-specific system tasks,
performance requirements and contextual domain models will be mapped to verify
and enhance their hybrid vision architecture that exploits model-based and
data-driven components. Furthermore, simulation tools will be enhanced for
data generation and training and will be used in conjunction with real-data to
optimize the system design. Thus, the goal will be to bring more quantitative
understanding of modern machine learning architecture performance and to
enable algorithm designs with ability to operate on reduced amount of data.
_**Table 6: Cognitive Computer Vision System data description** _
<table>
<tr>
<th>
**Data Category**
</th>
<th>
**Description**
</th>
<th>
**Type**
</th> </tr>
<tr>
<td>
Images
</td>
<td>
Images from different runs of UAV
</td>
<td>
JPEG image
</td> </tr>
<tr>
<td>
Meta-data
</td>
<td>
Meta data (timestamp , sensor internal and external pose parameters)
</td>
<td>
XML
</td> </tr>
<tr>
<td>
3D data scans
</td>
<td>
3D data from Photogrammetry system used in visual inspection and
measurement
</td>
<td>
3D reconstruction/Point clouds from Photogrammetry system
</td> </tr> </table>
**The purpose of the data collection/generation and its relation to the
objectives of the project:**
We will collect data that will enable the testing, validation and benchmarking
of performance of the visual inspection system for crack and defect
classification. Specifically, the data will include images and videos taken
from multi-copter runs with various vantage points and scales. This data will
be the basis for offline training of the classification system, as well as
benchmarking of the visual inspection system. Manual ground truth of specific
measurements will be included as annotations in order to enable the
benchmarking and assessments of error characteristics of the measurements.
**The types and formats of data that the project will generate/collect:**
The video and image data collection and annotations along with metadata will
use readily available standards (e.g. MPEG, XML) and will be available for the
scientific community for evaluation of alternative inspection methods.
**Naming conventions** : Images and xml meta-data match in names (with changed
extension).
Image sets are organized in directories with each directory representing a
drone trial run. Within a given directory, the sequence of acquired images
have a running index.
**Standards for Meta Data creation** :
We provide a full specification of our metadata formats following the XSD (XML
Schema Definition) standard. This specification is a collection of XML files
itself, but can also be rendered as image for better visualization. When we
need to update our formats, we use semantic versioning, being the last version
the currently valid one. The version is specified in the corresponding name of
the XSD file.
On the top of the aforementioned, we follow for the annotations the standards
in the literature and adopt the ImageNet and Pascal VOC xml (human-readable
and parse-able) annotation formats as is the de-facto standard in the computer
vision machine learning domain.
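A minimal sketch of reading one such Pascal VOC annotation (the file content shown is invented for illustration; real annotations will follow the versioned XSD described above):

```python
import xml.etree.ElementTree as ET

# A minimal Pascal VOC-style annotation (illustrative content only).
SAMPLE = """
<annotation>
  <filename>run_001_0042.jpg</filename>
  <object>
    <name>crack</name>
    <bndbox><xmin>120</xmin><ymin>80</ymin><xmax>310</xmax><ymax>145</ymax></bndbox>
  </object>
</annotation>
"""

def parse_voc(xml_text):
    """Return (label, (xmin, ymin, xmax, ymax)) tuples from a VOC annotation."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        coords = tuple(int(bb.find(t).text) for t in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((obj.findtext("name"), coords))
    return boxes

print(parse_voc(SAMPLE))  # [('crack', (120, 80, 310, 145))]
```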
**The re-usability of any existing data:**
Visual inspection data from past inspections done in the context of the
recently concluded EU H2020 project AEROBI will be used for early assessment
of vision algorithm performance and for machine learning training. This
includes data from the aerial robot collected during multiple field trials in
Seville and Thessaloniki. The image data collected from these field trials is
roughly in the order of several Gigabytes. Image data along with ground-truth
annotations are used for training machine learning algorithms.
**The origin of the data:**
The generation and collection of data will be from the UAV camera and sensors.
**The expected size of the data:**
The image data collected from these field trials is roughly in the order of
several Gigabytes.
**The data utility:**
The data collection will help with the inspection of the observed structures,
especially with regard to detecting damages. Image data along ground-truth
annotations will be used for training and validating machine learning
algorithms.
## Radiometric Sensor
The radiometric sensor will be integrated on the RPAS so that it is fully
controlled by the robotic system. It will require the development of an
interface between the robotic system and the sensing device, as well as
configuration of the data interfaces for the data communication and exchanges.
The selected sensor is a commercially available, lightweight and small Ground
Penetrating Radar (GPR) that will be measuring the thickness and internal
features of the structural elements, such as rebars, min rebar cover etc. The
radiometric sensor interfacing that will be developed will also include system
triggering and measurement data acquisition to enable full automation of the
system at the required bridge/tunnel positions fully controlled by the RPAS.
_**Table 7: Radiometric Sensor data description** _
<table>
<tr>
<th>
**Data Category**
</th>
<th>
**Description**
</th>
<th>
**Type**
</th> </tr>
<tr>
<td>
GPR
</td>
<td>
Data obtained from GPR sensor and location coordinates (global or relative)
</td>
<td>
Struct with location and GPR measurements at each depth
(floats)
</td> </tr> </table>
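A minimal sketch of what the struct described in Table 7 might look like in code (field names are assumptions, not the final schema):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GPRMeasurement:
    """One GPR sounding; field names are illustrative only."""
    x: float                 # location w.r.t. the structure reference system
    y: float
    z: float
    amplitudes: List[float]  # reflected-signal value at each depth step
```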
**The purpose of the data collection/generation and its relation to the
objectives of the project:**
Data will be collected from the radiometric sensor for internal structural
analysis of the elements being inspected. The relative location of the
measuring point with respect to the structure reference system will also be
collected.
Data collection in relevance with the multirotor flying robot will have to do
with the validation of the RPAS functionalities. Other data collected will be
used in a way to validate the inspection-related functionalities that will be
offered, like automated processing of images.
**The types and formats of data that the project will generate/collect:**
The main data will be the measurements of the radiometric sensor that will
consist in a struct or vector/matrix of float values. The location (three
float values) and possibly the orientation angles of the sensor device during
the measurement (three float values) will also be collected. **The re-
usability of any existing data:**
Existing data from previous inspections could be used jointly with the actual
measurements for the structural analysis.
**The origin of the data:**
The generation and collection of data will be from the RPAS onboard
radiometric sensor.
**The expected size of the data:**
The measurements offered by the on-board sensors are expected to have a total
size of tens or a few hundred MB.
**The data utility:**
The data collection will help in the structural analysis of the elements being
inspected.
## Vibration sensors
Vibration sensors for monitoring of structures will be implemented using
commercial MEMS accelerometers. The sensor modules will be characterized by
light weight and low-power in order to be suitable for being attached to the
structure by the RPAS and communicate in wireless mode with the control
center. They will have the ability to recognize the exceedance of thresholds.
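A minimal sketch of the threshold-exceedance check described above (the limit value is invented for illustration; real limits will come from the structural assessment):

```python
# Illustrative threshold in mg; actual limits depend on the structure.
THRESHOLD_MG = 50.0

def exceeds_threshold(ax_mg: float, ay_mg: float, az_mg: float,
                      limit: float = THRESHOLD_MG) -> bool:
    """True if any axis acceleration magnitude exceeds the limit (in mg)."""
    return any(abs(a) > limit for a in (ax_mg, ay_mg, az_mg))
```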
_**Table 8: Vibration sensors data description** _
<table>
<tr>
<th>
**Data Category**
</th>
<th>
**Description**
</th>
<th>
**Type**
</th> </tr>
<tr>
<td>
Acceleration X
</td>
<td>
Acceleration measured in the x-axis expressed in mg unit
</td>
<td>
Double
</td> </tr>
<tr>
<td>
Acceleration Y
</td>
<td>
Acceleration measured in the y-axis expressed in mg unit
</td>
<td>
Double
</td> </tr>
<tr>
<td>
Acceleration Z
</td>
<td>
Acceleration measured in the z-axis expressed in mg unit
</td>
<td>
Double
</td> </tr> </table>
**The purpose of the data collection/generation and its relation to the
objectives of the project:**
The vibration measurement is related to RESIST objective 2 (Remotely Piloted
Aircraft System (RPAS) for Inspection and Sensors’ Mounting to Critical
Transport Infrastructures), which describes the possibility of mounting
sensor modules on structures under inspection for monitoring.
**The types and formats of data that the project will generate/collect:**
All the acceleration data are double in 32 bit format.
**The re-usability of any existing data:**
No re-use of existing data is anticipated.
**The origin of the data:**
The vibration is measured by using a sensor module mounted on the structure by
exploiting a tri-axial accelerometer and wireless controlled from the ground
main station.
**The expected size of the data:**
The size of the data will be dependent on the desired frequency of the
vibration measurements and it will be small for intermittent measurements and
large for continuous monitoring.
**The data utility:**
The vibration measurement will be exploited for the assessment and management
of critical structures under extreme events.
## Ground Control Station
A fully operational Ground Control Station will be developed that will have
the following main characteristics:
1. can be easily transported to a specific area
2. can be deployed in short time
3. is integrated with the Control Centre and compatible with the communication architecture of RESIST
4. allows a friendly interface to specify navigation commands to the aerial robot taking into consideration the specifics of bridge and tunnel inspections
5. offers the operator the required information to perform the different needed inspections
6. offers the required interfaces to manage the complete system (mission sensors, navigation system, alarms, etc.) in an integrated way.
_**Table 9: Ground Control Station data description** _
<table>
<tr>
<th>
**Data Category**
</th>
<th>
**Description**
</th>
<th>
**Type**
</th> </tr>
<tr>
<td>
Flight plan
</td>
<td>
This dataset consists of all the information relative to the planning of the
different missions for the aerial robotic vehicle. This includes the waypoints
position information in GPS or local coordinates, velocity reference, heading,
action for each waypoint, actions between waypoints, camera orientation, etc.
</td>
<td>
All the information is stored in a text file.
A readme file describing the structure and meaning of each of the columns of
the log files will accompany each file. Files will be structured by date and
time of the experiment.
</td> </tr>
<tr>
<td>
Satellite, terrain and 3D maps
</td>
<td>
This dataset consists of satellite and terrain maps that can be downloaded
from third party public servers. Furthermore, 3D maps generated in the project
could be used as well in the GCS.
</td>
<td>
This information is stored in any image format with its related metadata which
indicates the global position of the image in the world and the zoom level of
the visualizer. This applies to the satellite and terrain images. The 3D map
will be stored using images and a text file with the position of each point of
the point cloud.
</td> </tr>
<tr>
<td>
Logged telemetry
</td>
<td>
This dataset consists of the logged telemetry downloaded from the UAV for its
analysis on the ground control station. This log data is in text file format
and will contain all the telemetry information of the drone autopilot
(position, velocity, timestamp, state of drone, etc.). It will be used to
debug and improve the drone control and navigation system. The data is stored
on-board in the flying robotic system.
</td>
<td>
Text files in text file format.
A readme file describing the structure and meaning of each of the columns of
the log files will accompany each file. Files will be structured by date and
time of the experiment.
</td> </tr> </table>
**The purpose of the data collection/generation and its relation to the
objectives of the project:**
The purpose of collecting the first two data categories (flight plan and maps)
is for the correct operation of the ground control station. All that data is
needed in order to be able to create, modify or for monitoring any mission for
the aerial robotic system that is going to be developed in the RESIST project.
**The types and formats of data that the project will generate/collect:**
There will be 3 types of data related to the ground control station developed
in the task 4.6 of the RESIST project. The types of data and their format is
defined as follows:
* **Flight plan:** All the information is stored in a text file. A readme file describing the structure and meaning of each of the columns of the log files will accompany each file. Files will be structured by date and time of the experiment (a minimal sketch of such a file follows this list).
* **Satellite, terrain and 3D maps:** This information is stored in any image format with its related metadata which indicates the global position of the image in the world and the zoom level of the visualizer. This applies to the satellite and terrain images. The 3D map will be stored using images and a text file with the position of each point of the point cloud.
* **Logged telemetry:** Text files in text file format. A readme file describing the structure and meaning of each of the columns of the log files will accompany each file. Files will be structured by date and time of the experiment.
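A minimal sketch of writing and reading such a plain-text flight plan (the column set is an assumption for illustration; the accompanying readme will define the real structure):

```python
# lat, lon, alt_m, speed_mps, heading_deg -- illustrative columns only.
WAYPOINTS = [
    (37.9838, 23.7275, 30.0, 2.0, 90.0),
    (37.9840, 23.7280, 30.0, 2.0, 90.0),
]

def save_plan(path, waypoints):
    """Write one waypoint per line, with a commented header."""
    with open(path, "w") as f:
        f.write("# lat lon alt_m speed_mps heading_deg\n")
        for wp in waypoints:
            f.write(" ".join(f"{v:.4f}" for v in wp) + "\n")

def load_plan(path):
    """Read waypoints back, skipping comment lines."""
    with open(path) as f:
        return [tuple(map(float, line.split()))
                for line in f if not line.startswith("#")]
```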
**The re-usability of any existing data:**
The flight plan data will be reused as far as it is needed to repeat the same
inspection. This could be because the system is being debugged, or on the
contrary it is working correctly, and different inspections are needed during
the duration of the project or even after the end of it.
The data related to the satellite and terrain public maps will be stored and
reused if several flights on the same area are expected. If this is not the
case that data can be deleted and downloaded again from public servers in case
it is needed.
The 3D maps of the bridge will be re-used during the duration of the project
for testing multiple times in the same bridge. As the 3D map is provided by a
third-party system and not generated in the GCS, it can be deleted once it is
no longer going to be used.
There is no re-usability expected for the logged telemetry because it will be
only downloaded to the GCS for its analysis in order to debug the system.
**The origin of the data:**
The flight plan data will be generated by the user directly in the GCS. This
data can be fully manually generated or assisted by a piece of software that
helps in the generation of the different waypoints that could be repetitive
and easy to program.
The data related to the satellite and terrain public maps is downloaded from
public servers with this type of data such as Google Maps.
The 3D maps of the bridge/tunnel are provided by a third-party system and not
generated in the GCS.
The origin of the logged telemetry is the on-board computers of the aerial
robot.
**The expected size of the data:**
* **Flight plan**: This data is expected to have a total size of tens or a few hundred KB.
* **Satellite and terrain public maps**: The collection of images is expected to have a total size of tens of Gigabytes.
* **3D map and telemetry data**: These two types of data are expected to have a total size of tens or a few hundred MB.
**The data utility:**
This data collection will help in the fast and reliable validation of the RPAS
functionalities with a GPS-free localization and the onsite inspection-related
functionalities.
## Cyber Security Mechanism Infrastructure Level and Endpoint Level
The mechanisms will focus on the security of the transport infrastructure’s
critical IT assets, as well as the RESIST platform itself and its emergency
communications. Emphasis will be given in guaranteeing the security of the
RPAS’ communication subsystems, in order to satisfactorily address security
concerns (such as jamming and control takeover attacks), which have typically
been overlooked when introducing drone technologies in the various application
domains.
In terms of the monitoring elements that will offer the real-time view of the
security posture, this involves collecting data from two levels of sources:
network infrastructure level and endpoint level. Network infrastructure level
datasets allow to observe threats in a wider scope, whereas endpoint datasets
are mostly application specific, allowing us to analyze in depth threats
specific to a certain application type. Moreover, new security mechanisms will
be developed to characterize different types of data flows using anomaly
detection techniques based on different theoretical backgrounds, learning
methods (e.g., clustering) or spatial and temporal signal processing. This
includes activities such as network traffic data analysis from the
infrastructure and end-point layers.
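As a toy illustration of the kind of unsupervised anomaly detection mentioned above (a sketch with invented flow features, not the project's actual detector):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy per-flow features: [bytes_sent, bytes_received, duration_s].
rng = np.random.default_rng(0)
normal_flows = rng.normal([500, 800, 1.0], [50, 80, 0.2], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_flows)

suspect = np.array([[50_000.0, 10.0, 0.01]])  # e.g. a sudden burst upload
print(model.predict(suspect))                 # -1 flags an anomaly
```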
Automated rating and classification mechanisms based on both internal and
external knowledge will be developed. The results of the refinement process
will be used to update access control policies, while a survey of the existing
policy enforcement points, as well as identification of new ones will be
performed.
_**Table 10: Cyber Security Mechanism Infrastructure Level and Endpoint Level
data description** _
<table>
<tr>
<th>
**Data Category**
</th>
<th>
**Description**
</th>
<th>
**Type**
</th> </tr>
<tr>
<td>
**Data gathered/generated at the network level**
</td> </tr>
<tr>
<td>
Network flow events
</td>
<td>
Events triggered by examining the packet traffic.
</td>
<td>
LOG/RICH TEXT
</td> </tr>
<tr>
<td>
Network management events
</td>
<td>
Events triggered by examining network related operations.
</td>
<td>
LOG/RICH TEXT
</td> </tr>
<tr>
<td>
Firewall event and logs
</td>
<td>
Events and auditing information for inbound/outbound traffic of the firewall.
</td>
<td>
LOG/RICH TEXT
</td> </tr>
<tr>
<td>
Intrusion Detection
System (IDS)
events and logs
</td>
<td>
Events and auditing information acquired by the IDS when inspecting the
traffic with a set of rules or when an anomaly occurs.
</td>
<td>
LOG/RICH TEXT
</td> </tr>
<tr>
<td>
Deep Packet inspection (DPI) results.
</td>
<td>
Detailed examination results of packet thorough analysis.
</td>
<td>
LOG/RICH TEXT
</td> </tr>
<tr>
<td>
Security Testing
results
</td>
<td>
Results generated through the security assessment of RESIST assets.
</td>
<td>
XML
</td> </tr>
<tr>
<td>
Monitoring events
</td>
<td>
Events captured by the event captors.
</td>
<td>
XML
</td> </tr>
<tr>
<td>
**Data gathered/generated in the application level**
</td> </tr>
<tr>
<td>
Operating System
calls/forensics
</td>
<td>
Auditing information on operating system’s operations, like process control,
file manipulation, information maintenance and communication. Also, forensic
data, such as registry files for windows systems or /etc, /var/log files for
Linux distributions.
</td>
<td>
LOG/RICH TEXT
</td> </tr>
<tr>
<td>
Full packet
capture/network forensics
</td>
<td>
Contain features such as Source IP, Destination IP, timestamps, protocol,
Payload and more.
</td>
<td>
PCAP
</td> </tr>
<tr>
<td>
Authentication events (at network & application levels)
</td>
<td>
Authentication actions that take place when a component authenticates at the
network or application level.
</td>
<td>
LOG/ RICH TEXT
</td> </tr>
<tr>
<td>
Authorization events (at network & application levels)
</td>
<td>
Authorization actions that take place when a component asks/gets authorization
to access a network or a specific application.
</td>
<td>
LOG/RICH TEXT
</td> </tr>
<tr>
<td>
Event/System logs
</td>
<td>
Contain events that are logged by the operating system components. They may
contain information about changes done on devices, device drivers, system
changes, and more.
</td>
<td>
LOG
</td> </tr>
<tr>
<td>
Security Testing results
</td>
<td>
Results generated through the security assessment of RESIST assets.
</td>
<td>
XML
</td> </tr>
<tr>
<td>
Monitoring events
</td>
<td>
Events captured by the event captors (e.g. a captor may monitor the
availability of a web server and report if the availability drops below a
minimum value).
</td>
<td>
XML
</td> </tr>
<tr>
<td>
**Data managed in regard to the resilient communication service may include
the following**
</td> </tr>
<tr>
<td>
Sensor Data (Time stamped values)
</td>
<td>
Data acquired by the sensors by measuring changes in their respective
environment (e.g. temperature sensor)
</td>
<td>
ARRAY [TIMESTAMP, FLOAT]
</td> </tr>
<tr>
<td>
Weather information/forecast data
</td>
<td>
Time stamped weather-related values.
</td>
<td>
ARRAY [TIMESTAMP, FLOAT]
</td> </tr>
<tr>
<td>
Video Streams
</td>
<td>
Involve surveillance of the physical infrastructure and critical security
points.
</td>
<td>
VIDEO FORMAT (e.g. mp4, avi etc)
</td> </tr>
<tr>
<td>
RPAS telemetry data
</td>
<td>
Data gathered remotely regarding the RPAS drones.
</td>
<td>
LIST [FLOAT1,…,FLOATn], n = quantity of values
</td> </tr>
<tr>
<td>
Civilians communication data
</td>
<td>
Alert messages-voice and text messages.
</td>
<td>
AUDIO FORMAT & TEXT
</td> </tr> </table>
Cyber Security Mechanisms in RESIST will be employed at both the network and
application layers, and they will generate and collect information; the table
above displays this data alongside a short description and its expected type.
**The purpose of the data collection/generation and its relation to the
objectives of the project:**
In the context of WP5 Cyber Security mechanisms will be developed/deployed
within the
RESIST framework. This is directly related to the RESIST objective 3, Cyber
Security Management Solutions (see RESIST Grant Agreement Annex A, Part B)
which aims to address 3 key aspects:
1. Provide enhanced data collection and analysis capabilities of cyber security related incidents
2. Provide awareness, trust, transparency and information sharing pertaining to such incidents and
3. Provide more efficient reaction to the detected incidents. Monitoring mechanisms may exist at both the network and application levels.
* **Cyber Security Management Solutions**
Regarding Cyber Security management, data related to cybersecurity monitoring
may be collected and analyzed (e.g. for anomaly detection) in order to enhance
the efficiency and effectiveness in the detection and reaction to related
incidents.
* **Alternative, Secure and Continuous Communications for Normal and Emergency Operations**
In the context of RESIST, a resilient communication node is planned that will
provide secure and continuous communication for critical elements of the
RESIST platform. This node will gather various types of data flows (from
sensor data to video streams) in order to route them accordingly.
**The re-usability of any existing data:**
No re-use of existing data is anticipated.
**The origin of the data:**
As mentioned above RESIST Cyber security mechanisms will be employed both in
the network and the application layers.
Origin of data at the network layer:
* Firewall events
* IDS alerts
* DPI events/results
* Honeypot alerts
* Network monitoring events
* Packet captures
* Flows

Origin of data at the application layer:
* Application logs
* System logs
* Security application testing results
* Application monitoring events
* Authentication events
* Authorization events
**The expected size of the data:**
Regarding the logs of the various security components, the volume is hard to
specify, but a moderate to high volume of data is expected. Network traffic
captured by the network monitors is likely to grow considerably in size.
Authentication and authorization events produce a low volume of data.
Monitoring events also have low volume, usually a few KBs. In addition,
alerts generated by the various security controls (IDS, firewall, honeypots
etc.) produce a low volume of data. Finally, security testing results also
have a low volume of data, usually ranging from about 200 KB up to a few MBs.
**The data utility:**
Security-related data can be utilized in the cyber-intelligence domain; for
example, full packet captures can be used in anomaly-detection research to
train anomaly detection models. Monitoring events and security testing
results will be used to assess the security posture of RESIST. Monitoring and
security testing in turn utilize other security-related data, such as
authentication and authorization events and system and application logs, to
conduct the security assessment.
## Mobility Continuity Module
The module for mitigating highways’ disruptions by providing personalized
re-routing options to users is under development and currently in its Alpha
version. It is a Multimodal Collaborative Decision Making (M-CDM) tool
providing re-routing advice, for the same or different transport modes, to
users in an individual/personalized manner. The tool makes use of available
maps of the transport network, potentially of high resolution, either
provided by the road operator, commercially available, or obtained from other
sources such as OSM (OpenStreetMap). In this way a layered map architecture
(similar to the Local Dynamic Map used in automotive applications) can be
deployed, including static information in the lower layers (lanes, barriers,
green areas, etc.) and dynamic information in the upper layers (traffic
density, fog areas, traffic light phases, vehicle trajectories, etc.). At the
same time, interfaces to the system will include various data internal and
external to RESIST. An integrated traffic simulation package (such as SUMO),
appropriately configured as part of Advanced Transport Information Systems,
will be used to combine the above information into actual highway routings,
incorporate the availability of other modes, and produce the actual outputs
to the end user: individual re-routing suggestions via all available mode
routings. The traffic simulator is able to model intermodal traffic systems
including road vehicles, public transport and pedestrians. Appropriate
interfacing to other modes’ portals/websites will ensure the availability of
other modes’ routing options, which will finally be combined into one
personalized solution adapted to individual highway operators.
_**Table 11: Mobility Continuity Module data description** _
<table>
<tr>
<th>
**Data Category**
</th>
<th>
**Description**
</th>
<th>
**Type**
</th> </tr>
<tr>
<td>
TripInformation
</td>
<td>
This is a class including aggregated information about each vehicle's journey
</td>
<td>
Complex object (DATEX II)
</td> </tr>
<tr>
<td>
UserInformation
</td>
<td>
This is a class including aggregated information about each user’s journey
preferences
</td>
<td>
Complex object (DATEX II)
</td> </tr>
<tr>
<td>
TrafficInformation
</td>
<td>
This is a class including aggregated information regarding the traffic of a
given lane.
</td>
<td>
Complex object (DATEX II)
</td> </tr>
<tr>
<td>
RoadNetworkMap
</td>
<td>
This is a map including the road network
</td>
<td>
OpenStreetMaps (OSM)
</td> </tr>
<tr>
<td>
RouteModeInformation
</td>
<td>
This is a definition of available modes of transport for public transportation
schedules and associated geographic information
</td>
<td>
General Transit Feed
Specifications (GTFS)
</td> </tr>
<tr>
<td>
TripSuggestion
</td>
<td>
This is a class including the proposed alternatives of a trip to a given end
user
</td>
<td>
Complex object
</td> </tr> </table>
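As an illustration of how the categories in Table 11 could be handled in
code, the sketch below models two of them as plain data classes; the field
names are our assumptions and not the actual DATEX II class definitions used
by the module.

```python
# A minimal sketch of two Table 11 data categories as data classes; the
# field names are illustrative assumptions, not actual DATEX II definitions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrafficInformation:
    lane_id: str
    vehicles_per_hour: float     # aggregated traffic density for the lane
    average_speed_kmh: float

@dataclass
class TripSuggestion:
    user_id: str
    modes: List[str]             # e.g. ["car", "rail"] for an intermodal route
    legs: List[str] = field(default_factory=list)  # ordered route segments

# Example: one personalised re-routing alternative for a user
suggestion = TripSuggestion(user_id="u-42", modes=["car", "rail"],
                            legs=["A1 exit 12", "P+R Station", "IC 305"])
print(suggestion)
```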
**The purpose of the data collection/generation and its relation to the
objectives of the project:**
The data collected by end users and sensors will be used in order to analyze
traffic within the region of interest and to subsequently optimize travel
proposals made to end users. Therefore, through the collection and generation
of data itinerary alternatives, end user mobility continuity will be
addressed.
**The types and formats of data that the project will generate/collect:**
Traffic and transport-related data will follow interoperable and open
standards such as the DATEX II (v2.0) standard. RESIST-formatted data will be
used in order to enable client-server based communications between the
various components of the platform.
**The re-usability of any existing data:**
Data collected by the mobility continuity platform could be re-used in order
to analyze traffic patterns within the region of interest. Such data will be
anonymized and will only serve as a basis for statistical analysis in any
subsequent use.
**The origin of the data:**
Data sources include personal mobile device and possibly on-road deployed
sensors.
**The expected size of the data:**
Data size will depend on the usage of the mobility continuity platform from
end users. The expected size should be within the GB range.
**The data utility:**
Data will be utilized in order to perform operations related to the mobility
continuity platform. Future use of anonymized data could include statistical
analysis of traffic data.
## Mobile Application
Suitable software for interfacing to mobile devices of highway users will be
developed to first collect personalized information from them and then provide
them with the re-routing options from the mobility continuity module. The
application will be executed by smartphones, handheld devices or other
desktop/web applications. Development of the mobile device application will
include design of a user friendly User Interface (front-end), as well as other
characteristics, such as data routing, security, authentication,
authorisation, working off-line, and service orchestration, whenever needed.
#### Table 12: Mobile Application data description
<table>
<tr>
<th>
**Data Category**
</th>
<th>
**Description**
</th>
<th>
**Type**
</th> </tr>
<tr>
<td>
TripInformation
</td>
<td>
This is a class including aggregated
information about each vehicle's journey
</td>
<td>
Complex object (DATEX II)
</td> </tr>
<tr>
<td>
UserInformation
</td>
<td>
This is a class including aggregated information about each user’s journey
preferences
</td>
<td>
Complex object (DATEX II)
</td> </tr>
<tr>
<td>
CurrentLocation
</td>
<td>
This is a class including geo-
coordinates of the current location of the user
</td>
<td>
Complex object
</td> </tr>
<tr>
<td>
TripSuggestion
</td>
<td>
This is a class including the proposed alternatives of a trip to a given end
user
</td>
<td>
Complex object
</td> </tr> </table>
**The purpose of the data collection/generation and its relation to the
objectives of the project:**
The mobile application will enable the collection of data from end users in
order to support mobility continuity, the analysis of highway driver behavior
and other functions. Such data is essential for providing personalized
suggestions back to end users and additionally enables the overall, social
optimization of such recommendations.
**The types and formats of data that the project will generate/collect:**
Transport-related data will follow standardized data models such as DATEX II
(v2.0). Moreover, data communications will follow RESTful conventions.
**The re-usability of any existing data:**
Data uploaded to remote RESIST modules such as the mobility continuity module,
could be re-used as per corresponding descriptions.
**The origin of the data:**
Data collected by the mobile applications will source from direct end user
data input, or on device sensors such as GPS, accelerometers etc.
**The expected size of the data:**
The volume of data sourced from sensors or direct user input depends on
actual mobile application usage. Due to potentially heavy use of the GPS and
accelerometer sensors, it could range from MBs to GBs.
**The data utility:**
Data collected and generated by the mobile application will be used as input
to modules such as the mobility continuity module.
## Module for highway users’ behaviour
A model of highway users' behavior under stress will be developed that will
provide recommendations to the control center regarding speed limits, and
recommendations to the control center and the Mobility Continuity Module
regarding communication type (advisory messages, cautions, warnings and
alerts, including samples of such communication) and presentation modality
(i.e., visual-symbolic, textual, auditory etc.) towards the drivers.
It will be validated by testing selected structured scenarios and by
implementing usability methods (e.g. expert heuristic evaluation, use cases
and personas, etc.) to evaluate its effectiveness, i.e. how users under
stress respond to the recommendations/messages.
#### Table 13: Module for highway users’ behaviour data description
<table>
<tr>
<th>
**Data Category**
</th>
<th>
**Description**
</th>
<th>
**Type**
</th> </tr>
<tr>
<td>
**Users’ demographic parameters**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Age
</td>
<td>
Range
</td>
<td>
Numeric
</td> </tr>
<tr>
<td>
Gender
</td>
<td>
Male/Female/not willing to specify
</td>
<td>
Text
</td> </tr>
<tr>
<td>
Education
</td>
<td>
No education, Primary education, Secondary education, Further education
</td>
<td>
Text
</td> </tr>
<tr>
<td>
Preferred language
</td>
<td>
English or local language
</td>
<td>
Text
</td> </tr>
<tr>
<td>
Preferred notification voice
</td>
<td>
Male/Female
</td>
<td>
Text
</td> </tr>
<tr>
<td>
Country/region of origin
</td>
<td>
From a list
</td>
<td>
Text
</td> </tr>
<tr>
<td>
Country/region of residence
</td>
<td>
From a list
</td>
<td>
Text
</td> </tr>
<tr>
<td>
Living area characteristics
</td>
<td>
Rural/village, small town, suburban/city outskirts, urban/city/large town
</td>
<td>
Text
</td> </tr>
<tr>
<td>
Socioeconomic status
</td>
<td>
Low, Average, High
</td>
<td>
Text
</td> </tr>
<tr>
<td>
Occupation
</td>
<td>
Employed, Self-employed, Not employed, Unemployed
</td>
<td>
Text
</td> </tr>
<tr>
<td>
Marital status
</td>
<td>
Single, Living as married, Married, Separated or Divorced, Widowed, not
willing to specify, it’s complicated
</td>
<td>
Text
</td> </tr>
<tr>
<td>
Number of children
</td>
<td>
Family size, Number of Kin relations
</td>
<td>
Numeric
</td> </tr>
<tr>
<td>
Driving license
</td>
<td>
In years
</td>
<td>
Numeric
</td> </tr>
<tr>
<td>
Driving exposure
</td>
<td>
Less than 10,000 km annually / More than 10,000 km annually
</td>
<td>
Text
</td> </tr>
<tr>
<td>
Driving frequency
</td>
<td>
Nearly daily, 1-4 times a week, 1-3 times a month, less than once a month
</td>
<td>
Text
</td> </tr>
<tr>
<td>
Frequency of public transport usage
</td>
<td>
Nearly daily, 1-4 times a week, 1-3 times a month, less than once a month
</td>
<td>
Text
</td> </tr>
<tr>
<td>
Risk proneness:
1\. “How do you see yourself - how willing are you in general to take risks?”
</td>
<td>
A 5-tick scale ranging from (1) Never (2) rarely (3) sometimes (4) often (5) always
</td>
<td>
Numeric
</td> </tr>
<tr>
<td>
2\. “On a typical journey, how likely is it that you will be checked for
speeding?”
</td>
<td>
(1) Never (2) rarely (3) sometimes (4) often (5) always
</td>
<td>
Numeric
</td> </tr>
<tr>
<td>
3\. “If you have experienced an extreme event on the road, how would you rate
your coping in such an event?”
</td>
<td>
Extremely calm, calm, anxious, very anxious, panic
</td>
<td>
Text
</td> </tr>
<tr>
<td>
**Infrastructure and environmental parameters**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Type of extreme events
</td>
<td>
Bridge/tunnel, Man-made (e.g., severe traffic accident, terror attack…) /
Natural (e.g., extreme weather…), lighting conditions
</td>
<td>
Text
</td> </tr> </table>
**The purpose of the data collection/generation and its relation to the
objectives of the project:**
The module for highway users’ behavior will explore the impact of stressful
conditions on drivers’ effective operational capacity. The collected data will
include demographic parameters such as; age, gender, education, driving
experience, region (culture, percent of locals vs tourists or non-native
drivers). Furthermore, it will investigate the effect of the type of threat,
type of structure (bridge/tunnel), weather & lighting conditions, level and
type of traffic (e.g., public transportation modes, types of vehicles).
**The types and formats of data that the project will generate/collect:**
Publicly available numeric data and text will be the input of this module.
The output will be properly formatted informative recommendations.
**The re-usability of any existing data:**
Publicly available data related to similar situations that have already been
investigated in previous EU research projects, will be used to identify the
relevant environmental and behavioral parameters and conceptually model the
users' behavior under stress. Additionally, data available from the end users
systems (TMS in the control center, demographic data from the tolls etc) will
be integrated to the module.
**The origin of the data:**
Data sources are stakeholders, the control center, end users, and the social
media interface tool.
**The expected size of the data:**
Data size will depend on the number of road users and the real-time-related
information. The expected size should be within the MB to GB range.
**The data utility:**
Data collected and generated by the module for highway users’ behavior will
provide recommendations to the control center regarding speed limits for the
threats under study, in case of drivers operating under high stress (e.g., due
to damaged bridges/tunnels) and recommendations to the control center and the
mobility continuity module regarding communication type (advisory messages,
cautions, warnings and alerts) and presentation modality (i.e., visual-
symbolic, textual, auditory etc.), to the road users, via messages to their
smart phones and/or electronic traffic signs.
## Social Media Interface Tool
This tool will be the interface for collecting real-time and on-scene
information about the disaster event from users that are close to/at the
event site, as well as pushing/publishing event-related information to the
users of the media (in the vicinity or not). It will be an interface tool
running on the RESIST platform that connects to social media such as Twitter,
Facebook, Google+ and/or other user accounts to collect site-specific
information extracted from users' text messages. For this, the appropriate
interfacing to social networking sites will be developed based on social
media monitoring tools. Social media monitoring includes measuring opinions
and sentiment of groups and influencers. The outputs of this task (including
awareness of people/users in the vicinity or at risk) will feed the risk
assessment and mobility continuity modules (internal to RESIST), but also the
public (close to the event or not) through dissemination of messages from the
RESIST platform.
#### Table 14: Social Media Interface Tool data description
<table>
<tr>
<th>
**Data Category**
</th>
<th>
**Description**
</th>
<th>
**Type**
</th> </tr>
<tr>
<td>
Timestamp
</td>
<td>
This is the time that the social media post was uploaded
</td>
<td>
Text: ”Datetime: 2018-12-20T09:13:55+00:00”
</td> </tr>
<tr>
<td>
Location_gps
</td>
<td>
This is the GPS coordinates of the mobile device that uploaded the post
</td>
<td>
Point:(37.997013, 23.796357)
</td> </tr>
<tr>
<td>
Location_position
</td>
<td>
This is the location as identified in the submission of the post
</td>
<td>
String: “Athens, Greece”
</td> </tr>
<tr>
<td>
Message
</td>
<td>
This is the content of the social media post
</td>
<td>
String: ”This is a message”
</td> </tr>
<tr>
<td>
Emoji
</td>
<td>
This is the list of the emoji contained in the message
</td>
<td>
Array: [“emoji_1”,….]
</td> </tr> </table>
**The purpose of the data collection/generation and its relation to the
objectives of the project:**
The data collected by social media posts will be used in order to analyze the
public reactions towards the event. The collected text will be analyzed to
measure opinions and sentiment of the individuals close or not to the event
site. The tool will provide the measured characteristics internally and
externally to the RESIST platform.
**The types and formats of data that the project will generate/collect:**
Publicly available social media posts, text, ideograms and special characters,
will be the input of this tool. The output will be properly formatted
informative messages.
**The re-usability of any existing data:**
Publicly available social media posts related to similar events that have
already taken place, will be used to model the tool, identify patterns,
emotions and user behaviors.
**The origin of the data:**
Data sources are social media APIs.
**The expected size of the data:**
Data size will depend on the number of social media users and the event-
related messages. The expected size should be within the GB range.
**The data utility:**
The data will be utilized to measure opinions and sentiment of people affected
directly or indirectly by the event.
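To illustrate the kind of analysis described above, the following is a
minimal, lexicon-based sketch of scoring the sentiment of posts shaped like
the Table 14 records; the word lists and scoring rule are illustrative
assumptions, not the tool's actual analysis method.

```python
# A minimal, lexicon-based sentiment sketch over Table 14-shaped posts;
# the word lists and scoring rule are illustrative assumptions.
NEGATIVE = {"collapsed", "blocked", "danger", "panic", "stuck"}
POSITIVE = {"safe", "reopened", "clear", "helped"}

def sentiment(post: dict) -> int:
    """Crude score: positive word hits minus negative word hits."""
    words = [w.strip(".,!?").lower() for w in post["message"].split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

post = {
    "timestamp": "2018-12-20T09:13:55+00:00",
    "location_position": "Athens, Greece",
    "message": "Bridge blocked, traffic stuck but everyone is safe",
    "emoji": [],
}
print(sentiment(post))  # -> -1 (two negative hits, one positive)
```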
## Software for Safety, Security Risk Assessment and Management System
This software covers the risk assessment and management of critical highway
structures under extreme events. It includes the identification of major
hazards and hazard scenarios, the assessment of the criticality of structures
at risk for each scenario and the identification and assessment of preventive
and response measures.
#### Table 15: Structural Vulnerability Assessment and Strengthening
Intervention Planning Tool data description
<table>
<tr>
<th>
**Data Category**
</th>
<th>
**Description**
</th>
<th>
**Type**
</th> </tr>
<tr>
<td>
Scenario
</td>
<td>
List of the use cases identified by the requirements outputs
</td>
<td>
String
</td> </tr>
<tr>
<td>
Probability
</td>
<td>
Probability of occurrence for the scenario
</td>
<td>
double
</td> </tr> </table>
**The purpose of the data collection/generation and its relation to the
objectives of the project:**
The data collected will be used for the identification and monetization of
the impact of successful threat scenarios that have been modelled, and to
assess the probability of occurrence of the above scenarios.
**The types and formats of data that the project will generate/collect:**
All the data are doubles (32-bit format) and strings.
**The re-usability of any existing data:**
No re-use of existing data is anticipated.
**The origin of the data:**
Most of the data will be part of the output from WP3, WP4, WP5 and WP6.
Moreover, data from the Literature are going to be used.
**The expected size of the data:**
Data size will depend on the number of scenarios that are going to be defined.
The expected size should be within the MB range.
**The data utility:**
The data collected and generated by the module will enable the infrastructure
manager to identify vulnerabilities of the specific transport structures
under study to the specific threat scenarios.
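As a simple illustration of how the Scenario and Probability fields of Table
15 could be combined with a monetised impact, the sketch below ranks
hypothetical scenarios by expected loss; the figures, impact field and
ranking rule are illustrative assumptions, not outputs of the RESIST tool.

```python
# A minimal sketch ranking threat scenarios by expected loss; the impact
# figures and field names are illustrative assumptions.
scenarios = [
    {"scenario": "Severe flood at river bridge", "probability": 0.02,
     "impact_eur": 5_000_000},
    {"scenario": "Tunnel fire", "probability": 0.005,
     "impact_eur": 12_000_000},
]

for s in sorted(scenarios,
                key=lambda s: s["probability"] * s["impact_eur"],
                reverse=True):
    expected_loss = s["probability"] * s["impact_eur"]
    print(f'{s["scenario"]}: expected annual loss = EUR {expected_loss:,.0f}')
```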
# Making RESIST data FAIR
In 2016, the ‘FAIR Guiding Principles for scientific data management and
stewardship’ [6] were published in _Scientific Data_ . The authors intended
to provide guidelines to improve the findability, accessibility,
interoperability, and reuse of digital assets. The principles emphasize
machine-actionability (i.e., the capacity of computational systems to find,
access, interoperate, and reuse data with no or minimal human intervention)
because humans increasingly rely on computational support to deal with data
as a result of the increase in volume, complexity, and creation speed of data.
The data that will be generated during and after the project must follow the
“FAIR” principles. Such a requirement does not affect implementation choices
and does not necessarily suggest any specific technology, standard, or
implementation solution.
The FAIR principles were formulated to improve the practices for data
management and data curation, and are intended to apply to a wide range of
data management purposes, whether data collection or the data management of
larger research projects, regardless of scientific discipline.
With the endorsement of the FAIR principles by H2020 and their implementation
in the guidelines for H2020, the FAIR principles serve as a template for
lifecycle data management and ensure that the most important components of
the data lifecycle are covered.
This is intended as an implementation of the FAIR concept rather than a
strict technical implementation of the FAIR principles.
## Guidelines on FAIR data management in H2020
RESIST aims to follow the requirements stemming from the H2020 Guidelines on
FAIR Data Management [2]. Key aspects from said guidelines are presented
below.
Each project should provide information on:
* the handling of research data during and after the end of the project
* which data will be collected, processed and/or generated
* which methodology and standards will be applied
* whether data will be shared/made open access and
* how data will be curated and preserved after the project's end.
For this, a Data Summary must be given, presenting:
* the purpose of the data collection/generation and its relation to the objectives of the project
* the types and formats of data that the project will generate/collect
* the re-use of existing data, if any, and how they will be re-used
  * the origin of the data
Then, the mechanisms to make data findable must be described. This entails the
possible use of metadata, Digital Object Identifiers, naming conventions and
search keywords.
The project must then specify which data will be made openly available or will
be licensed for re-use and how they will be made accessible, detailing the
repository, the access methods and any relevant documentation.
Data should be interoperable as far as possible. This may include using
standard formats, open software applications or standard data and metadata
vocabularies.
Provisions should be put in place for data security and recovery, if needed.
Informed consent needs to be discussed too.
Some of the above guidelines are overlapping with the GDPR requirements, for
example the informed consent procedure, the data description and the security
provisions.
## RESIST Naming conventions regarding deliverables and documents
In M6, the consortium produced, in D1.1 “Quality and risk management”, a
unique coding system for all project documents and deliverables, as indicated
in the table below. As project deliverables are already coded in the
Description of Work, their naming will be as follows:
<table>
<tr>
<th>
Project name
</th>
<th>
“RESIST”
</th> </tr>
<tr>
<td>
Underscore
</td>
<td>
“_”
</td> </tr>
<tr>
<td>
Next 3-4 digits following a pattern
</td>
<td>
“DX.X”, with X.X representing the deliverable number according to the
Description of Action (DoA).
</td> </tr>
<tr>
<td>
Underscore
</td>
<td>
“_”
</td> </tr>
<tr>
<td>
Deliverable title as in DoA (with blank spaces replaced by underscores)
</td>
<td>
E.g. “Data_Management_Plan”
</td> </tr>
<tr>
<td>
Underscore
</td>
<td>
“_”
</td> </tr>
<tr>
<td>
Deliverable version following a pattern (if needed)
</td>
<td>
"VX.X" with X.X representing the number of revision of the deliverable.
</td> </tr> </table>
Example:
“RESIST_D5.1_Architecture_of_the_networking_platform_supporting_secure_communications_V3”
means version 3 of deliverable D5.1, titled _Architecture of the networking
platform supporting secure communications_ . Deliverables defined in the DoA
as Public (listed in Appendix A) will be provided in an open space of the
project website (www.resistproject.eu) after being reviewed and approved by
the EC, so that anyone may access them. Apart from the title and a short
description, they will include search keywords on their title page. For
deliverables that are confidential and whose content is restricted, an
executive summary of the deliverable will be placed on the project website
after EC acceptance.
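For illustration, the following minimal sketch assembles a deliverable
filename according to the convention above; the function name is our own, but
the pattern follows the table and the D5.1 example given in the text.

```python
# A minimal sketch of the RESIST document naming convention; the function
# name is ours, the pattern follows the convention table above.
def deliverable_filename(number: str, title: str, version: str = "") -> str:
    parts = ["RESIST", f"D{number}", title.replace(" ", "_")]
    if version:
        parts.append(f"V{version}")         # version suffix only if needed
    return "_".join(parts)

print(deliverable_filename(
    "5.1",
    "Architecture of the networking platform supporting secure communications",
    "3"))
# RESIST_D5.1_Architecture_of_the_networking_platform_supporting_secure_communications_V3
```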
## Making data findable, including provisions for metadata
The RESIST consortium has a strong focus on making sure that the generated
data will be identifiable and easily discoverable, thus rendering them
findable, including provisions for metadata.
An overview of the data types that will be available within the context of
RESIST is briefly described in the present deliverable and will be refined as
the project progresses, according to the needs. The persistence layer of
RESIST, as well as the REST API to store and retrieve the data from the
persistence layer, will be described in more detail in the deliverables of
WP8 regarding the integration of the project. All the data (images and sensor
data) will be uniquely identified per operator ID, asset ID, inspection ID,
and by the time of acquisition. The following table describes a preliminary
plan regarding the unique identification of the RESIST data in the RESIST
persistence layer:
<table>
<tr>
<th>
**Type of Data**
</th>
<th>
**Identification Criteria**
</th> </tr>
<tr>
<td>
**Fixed sensor data**
</td>
<td>
Operator unique ID,
Asset ID,
Coordinates of the sensor,
Date&Time of the data acquisition
</td> </tr>
<tr>
<td>
**Collected images**
</td>
<td>
Operator unique ID,
Asset ID,
Inspection ID with Timestamp,
Image unique ID per inspection
</td> </tr>
<tr>
<td>
**Defects**
</td>
<td>
Operator unique ID,
Asset ID,
Inspection ID with Timestamp,
Unique ID per image,
Coordinates of the defect
</td> </tr>
<tr>
<td>
**Sensor data collected with RPAS (deflection, inclination, elasticity
modulus, rebars)**
</td>
<td>
Operator unique ID,
Asset ID,
Inspection ID with Timestamp,
Coordinates of the measurement.
</td> </tr> </table>
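As a sketch of how the identification criteria above could be composed into a
single stable key, consider the following; the separator and the example
values are illustrative assumptions, not the actual RESIST key format.

```python
# A minimal sketch composing the table's identification criteria into one
# stable key for an inspection image; separator and values are illustrative.
def image_key(operator_id: str, asset_id: str,
              inspection_id: str, image_id: str) -> str:
    """Join the identification criteria into a single unique key."""
    return "/".join([operator_id, asset_id, inspection_id, image_id])

print(image_key("OP-01", "BRIDGE-A12", "INSP-2019-03-01T10:00Z", "IMG-0042"))
# -> OP-01/BRIDGE-A12/INSP-2019-03-01T10:00Z/IMG-0042
```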
All the collected data in RESIST that will be stored in RESIST persistence
layer will have a solid reference for future use and will be stored in a
standardised format.
Data collected by the mobile application will also be stored in the
corresponding persistence layer of the associated remote RESIST modules of
the system. Only a minimal set of data associated with user profiling will be
stored in the persistence layer of the mobile device, and definitely no names.
Regarding RESIST documents/deliverables, the clear and harmonized naming
conventions presented previously will be used. All RESIST project
deliverables will be named following the nomenclature of section 4.1 of the
present deliverable, as described in deliverable D1.1 “Quality and Risk
Management Plan”.
## Making data openly accessible
The pseudonymous personal data of RESIST will be stored in dedicated
databases, hosted by relevant partners and accessible only to authorised
users. Any data that will be shared externally to the consortium will be
registered in the Registry of Research Data Repositories [3]. Such data will
be offered under Creative Commons License Attribution-Non-Commercial CC BY-NC.
Specific scripts will be offered for accessing these data and offering basic
statistics on the data. The source code of these access applications will be
offered under Apache v.2 license [4] and will be made available on a public
repository along with detailed documentation of usage.
Open datasets will be uploaded to an open access repository (e.g. Zenodo,
_https://zenodo.org_ ). Links to the open shared datasets will be available
on the RESIST website.
Access to data will be enabled through the use of Open APIs and interoperable
formats.
All project deliverables will be available to authorized consortium members
of the project through the internal project management tool and document
repository, Redmine, and all final submitted versions will also be available
through the partner area of the website.
The public project deliverables and the executive summaries of non-public
deliverables will be available on the RESIST website and will also be made
available through ResearchGate. In more detail, the deliverables that have
been defined in the Description of Action as Public will be provided in an
open space of the project website (www.resistproject.eu) after their review
and approval by the EC, so that anyone may access them. For deliverables that
are confidential and whose content is restricted, an executive summary will
be made available on the project website after EC approval. Moreover, RESIST
will follow the Open Access practice [5] of providing online access to its
scientific research articles, as presented in section 4.2.
## Making data interoperable
All data that will be externally offered within the project’s scope will be
available using dedicated scripts, as already mentioned. These scripts will
make use of the RESIST ICT platform API. The platform API will be open and
thoroughly documented in order to enable and encourage its usage from every
third-party application without forcing any dependencies on the provided
scripts. It will also use established standards as much as possible.
The project tools will be based on open source software to facilitate their
adoption and possible modifications. Interoperability of data will be enabled
with standardized data. Data regarding mobility and continuity will be
modelled according to industry practices and standards such as the DATEX2.0
standard.
The data in the databases will be exposed in a text format following well-
known and established standards (e.g., CSV, JSON or XML).
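For illustration, the sketch below exposes the same hypothetical database
record in two of the text formats mentioned above, JSON and CSV; the record
fields are illustrative assumptions.

```python
# A minimal sketch serialising one hypothetical record as JSON and as CSV.
import csv, io, json

record = {"asset_id": "BRIDGE-A12", "sensor": "inclination", "value": 0.37,
          "timestamp": "2019-03-01T10:15:00Z"}

print(json.dumps(record))                      # JSON representation

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=record.keys())
writer.writeheader()
writer.writerow(record)
print(buf.getvalue())                          # CSV representation
```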
Currently there is not a standard for exchanging social media data. This will
be closely monitored and the tool will adapt to any standard accepted by the
scientific community when available.
All RESIST security mechanisms will rely on information security industry
standard format and representations (e.g. for malicious traffic signatures or
incident reporting), to allow for interoperability with associated mechanisms
and tools widely available in the field. For example for cybersecurity related
data, such formats will include the MISP open standards (Open Source Threat
Intelligence Platform & Open Standards For Threat Information Sharing;
_https://www.misp-project.org/_ ) , as well as the Snort rules format (
_https://www.snort.org/rules_explanation_ ), which is used in the tool itself
and its numerous derivatives.
## Increase data re-use (through clarifying licenses)
As mentioned, the data that will be generated by RESIST will be licensed for
scientific purposes under Creative Commons License Attribution-Non-Commercial
CC BY-NC after the completion of the project. Other than the ones imposed by
this license, no other restrictions for re-usage by third parties are
envisaged.
Any new filters and signatures that will be generated in the context of the
project, e.g. for traffic classification or malware identification, will be
represented in industry standard formats and will be shared with the licensing
terms pertaining to each of the corresponding tools (e.g. Snort rules;
_https://www.snort.org/rules_explanation_ ) . The same process will be
followed for threat intelligence insights gathered through the use of the
RESIST platform in the project’s critical domain, via the MISP open standards,
as mentioned above.
Data obtained by RESIST modules will either be re-used in order to further
enhance the functionality of the existing system or will be used to perform
statistical analysis of the associated collected data types. Data re-use will
be enabled through data anonymisation and the use of harmonised data
standards.
Data collected by the mobile application will be uploaded to associated remote
modules of the RESIST platform. Therefore any potential re-use of data will
follow the data re-use strategy of the corresponding RESIST module.
Data collected by the social media tool will be only used to further improve
the quality of the produced output. Data related to social media posts will
not be made available to anyone and will be used exclusively by the tool. The
outputs of the tool analyses will be available to the RESIST platform to be
reused as appropriate.
In principle the data that will be collected are expected to be available for
five years after the project’s end.
# Allocation of resources
At this stage of the project, there are no resources specifically allocated
for making RESIST data FAIR. In general terms, it is highly probable that no
extra-costs will be incurred. Regarding equipment and maintenance costs, these
will be, for the lifetime of the project, covered by the consortium.
Regarding personnel costs, we expect that making data FAIR will not demand
any extra measurable effort. The personnel hours dedicated to this will be
counted within the Person Month dedications to the respective tasks.
In order to proactively mitigate the risk of partners overspending due to the
efforts of making any RESIST data FAIR, WP leaders should be subjected to
continuous monitoring by the Data Manager and of course the Project
Coordinator through the periodic management reports, the physical meetings,
etc.
The Consortium will take advantage of the fact that costs associated with open
access to research data can be claimed as eligible costs of any Horizon 2020
grant, and if necessary, reallocation of resources between partners will take
place.
Moreover, it should be noted that 3 person months have been allocated to
Task 1.4: “Data Management and Internal Project Server”, which mostly
concerns data-related matters.
More information regarding the resources for internal data and for data to be
shared with external entities are provided in the next sections 5.1 and 5.2
respectively.
## Resources for internal data
The internal data to be exchanged among RESIST partners are related to
deliverables, internal reports, minutes of meetings and teleconferences,
agendas, templates, presentations, administrative documents and files of any
kind in general. RESIST will make use of Redmine, a free and open source, web-
based project management and issue tracking collaborative tool in order to
facilitate information exchange, storage, ordering and retrieval as needed in
the project. Redmine will be used as the common document repository of the
project and is already configured by the website administrator. Redmine is set
and runs under the responsibility of ICCS, the project coordinator. All
consortium members are actively using this interactive knowledge-sharing
platform for information exchange, discussions, news and calendar. This
collaborative tool and relevant storage structure is consortium confidential
with restricted access.
The key features supported by Redmine are the following:
* The platform increases the efficiency of RESIST partners by providing a digest of upcoming, ongoing and overdue work.
* Project plans can be created and project work stream items can be connected to activities and milestones by using integrated Kanban boards and Gantt charts.
* Work Package leaders and Project Managers may view, manage and execute on a high-level roadmap plan, covering their long-term work strategy.
* The platform can provide Team Leads and Managers (Project Coordinator, Work Package leaders, Technical and Quality Assurance Manager, Data Manager, Ethics Manager) with a complete picture of task assignments and commitments across the RESIST project.
* RESIST partners can share files and collaborate on project documents.
* RESIST progress, goals and plans can be visualised in order to be able to track how close each team is to meeting deadlines and identify where bottlenecks may be occurring.
* RESIST partners can easily and instantly share feedback, ideas and questions.
Redmine is free of cost, so the declared resources concern only the setting
up and deployment of the collaborative platform. In addition, resources
relevant to administrative and maintenance issues with respect to Redmine are
covered by the project coordinator, ICCS, who also has the rights for
managing users and groups and for assigning roles and permissions. To this
end, ICCS has already provided RESIST partners with credentials, upon their
request for an account, in order for them to be able to log in and use it.
Further to this, for the needs of continuously upgrading the collaborative
tool and backing up its database in order to address unforeseen events (hard
disk failures, web attacks, etc.), resources have already been allocated by
the consortium.
## Resources for data to be shared with external entities
As previously stated, RESIST participates in the Horizon 2020 Open Research
Data Pilot. The project aims to make the research data to be generated
findable, accessible, interoperable, reusable and secured.
Technical validation and evaluation, as well as benchmarking and system
assessment, will be produced following the outcomes and key performance
indicators of the two pilot activities in Greece and Italy.
D2.4 “KPIs; Evaluation methodology; Techno-Economic, Env., Social analyses”,
defines success criteria and the KPIs to be used. It also defines a framework
for technoeconomic/environmental/social analyses and for the assessment of the
effects of the proposed solutions on regional economy and the environment.
D8.1 “Integration Plan” provides information about the integration cycles,
establishes the tasks at each step and acts as the specification of the whole
integration, ensuring there are no conflicting requirements between the
individual components, both in hardware and in software/middleware. D8.4
“Technical validation and evaluation report – User’s guide” will include the
outcomes of the technical testing of the integrated platform, such as: a
description of the hardware, modules and their versions, integrated devices,
communication channels and protocols; a description of the integration test
cases with reference to their source code; a log of the execution of the
integration test cases; and a register of raised issues with their status, to
be used during the field tests. D9.1
“Validation, demonstration and benchmarking preparatory report” will include
all preparatory steps for each pilot, clarification of access, equipment and
other steps for the smooth execution of the pilots. This will also include the
final validation, demonstration and benchmarking scenarios and metrics.
Finally, D9.2 “Final system assessment and benchmarking” will include the
results of all activities in both sites and the set of recommendations and
benchmarking results. The assessment will include as Annexes the technical
performance and usability of all individual solutions and the Integrated
Platform. In addition, techno-economic analyses, the environmental and social
assessment will also be presented.
However, certain resources are needed in order for the data to be shared with
external entities.
Any need for personnel-hour allocation for data stewardship is already part
of the allocated person months per Work Package, as Work Package leaders are
mainly responsible for such activities, with the support of all other
partners involved in the given Work Package and with the overall guidance and
support of the project's consortium, which has budgeted for dissemination
costs. The RESIST project will follow the Open Access practice [5] of
providing online access to scientific information that is free of charge to
the end-user and reusable.
This covers both peer-reviewed scientific research articles (published in
scholarly journals) and research data (data underlying publications, curated
data and/or raw data).
In the cases where publication in a scientific journal shall be selected as a
means of dissemination of the project activities, one of the two channels of
open access shall be chosen (evaluated per case), i.e. either:
* Self-archiving / 'green' open access – the author, or a representative, archives (deposits) the published article or the final peer-reviewed manuscript in an online (either institutional, or centralized such as Zenodo) repository before, at the same time as, or after publication.
* Open access publishing / 'gold' open access - an article is immediately published in open access mode. In this model, the payment of publication costs is shifted away from subscribing readers. The most common business model is based on one-off payments by authors/partners.
RESIST partners will also consider the possibility to make use of free of
charge, open data repository services, such as Zenodo. Through such
repositories, metadata and digital object identifiers could be assigned to
RESIST datasets in order for them to be located via search after the end of
the project. The datasets in Zenodo will be preserved in line with the
European Commission Data Deposit Policy. The data will be maintained
indefinitely (minimum 5 years) ensuring no costs for archiving. It should be
noted that any unforeseen costs related to open access to research data in
Horizon 2020 are eligible for reimbursement for the duration of the project.
It is also worth mentioning that the costs for making data FAIR depend
heavily on the yet-to-be-specified amount of data that will be collected and
the effort needed to process them. In any case, all costs related to FAIR data
management that will occur during project implementation will be covered by
the project’s budget. However, any other cost that may relate to long term
data preservation will be discussed among consortium members, but as stated
before the services of free of charge research data repositories will be
pursued.
# Data security
The goal of the data security is to maintain the confidentiality, integrity,
and availability of information resources in order to enable successful
research and business operations. All RESIST data will be collected, stored,
protected, shared, retained and destroyed upholding state-of-the-art security
measures and in full compliance with relevant EU legislation (see also Section 8).
As a general rule, these data will be stored on paper and/or computer files
protected by state-of-the-art physical and logical security measures: the
archives containing the paper folders will be locked in a non-reachable and
hard to access place.
On the infrastructure level, the following considerations will be met:
1. The data repository will allow physical access to the premises only to authorized staff
2. The servers should comply with appropriate security standards (e.g. prohibiting remote access to unauthorised users) and be accessible only by authorised personnel with the proper use of credentials.
3. State-of-the-art security mechanisms should be deployed on the servers (e.g. a host intrusion detection system) or at an important network vantage point (e.g. a network-based intrusion detection system) to monitor the systems and the traffic flow for known threats.
4. Access to the data repository should be accomplished over HTTPS, and any data transfer can be done via SFTP. Of great importance is also the security of the data stored, meaning that strong cryptographic mechanisms should be employed to make the data incomprehensible to an attacker, as well as the availability of the data, which can be handled by keeping regular backups, stored in a similarly secure manner.
From an endpoint perspective, input validation and cross-site scripting (XSS)
protection techniques should be present, alongside mechanisms to protect
against session hijacking by a malicious third party.
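As an illustration of the strong cryptographic protection of stored data
called for in point 4 above, the following minimal sketch uses the symmetric
Fernet scheme from the Python "cryptography" package; this is one possible
technique, and in practice the key would be kept in a secure key store,
separate from the data.

```python
# A minimal sketch of encrypting data at rest with symmetric Fernet
# encryption; key management is out of scope here, and the key would never
# be stored alongside the data in practice.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # kept in a secure key store, not on disk
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"sensor reading: 0.37")
assert cipher.decrypt(ciphertext) == b"sensor reading: 0.37"
```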
This chapter focuses on the technical guidelines specific to mitigating the
occurrence of common data vulnerabilities.
## Data Acquisition
RESIST receives data primarily through various types of sensors, UAVs, social
media and users’ mobile phones, as detailed in Section 3.
Data are acquired in conformity of security best practices through protected
connections and dedicated state-of-the-art IT infrastructures (LANs, protected
servers, firewall).
To guarantee security and reliability of the communication, the most advanced
web standards are implemented:
* HTTPS transport level security is applied.
* Authentication and authorization are required both for user interactions with the system and for machine-to-machine transactions.
* Access and interactions are controlled and audited.
Mobility data will mostly be retrieved by the mobility continuity module over
a client-server architecture, secured according to industry standard
techniques. Data will be encrypted through the use of the HTTPS protocol.
Moreover, authentication of users to the system will be implemented through
the use of the OAuth 2.0 and OpenID Connect protocols.
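A minimal sketch of an OAuth 2.0 client-credentials token request over HTTPS
is shown below; the endpoint URL and client credentials are placeholders, not
actual RESIST platform values.

```python
# A minimal OAuth 2.0 client-credentials sketch; endpoint and credentials
# are placeholders, not actual RESIST values.
import requests

response = requests.post(
    "https://auth.example.org/oauth2/token",       # placeholder endpoint
    data={"grant_type": "client_credentials",
          "client_id": "resist-module",
          "client_secret": "..."},                 # never hard-coded in practice
    timeout=10,
)
access_token = response.json()["access_token"]
headers = {"Authorization": f"Bearer {access_token}"}  # used on subsequent calls
```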
Mobile data will be collected through local sensors and user input. Mobile
device sensor data access is restricted to the respective APIs, which are
used only upon user consent. Moreover, such data will only be tunnelled to
the associated remote RESIST modules, as described in previous paragraphs.
User input will be collected through the use of mobile data input APIs and
will be directly tunnelled to remote RESIST modules as described in previous
paragraphs.
Publicly available social media posts will be collected by the social media
interface tool through the APIs available from the social media platforms.
Data acquisition will also be performed by the respective cybersecurity
monitoring components and functions. Details on such components will be
decided/defined within WP5, Task 5.3: Cyber Security Mechanisms.
## Data Processing
In the context of the project, data are mainly processed on the Mobility
Continuity Module and the Cognitive Computer Vision System.
Further elaboration of aggregated data will be performed in the scope of WP6
for extracting global statistics for project impacts evaluation and overall
business and mobility-related indicators.
In all situations that include data processing, the computations generally
take place in regularly updated software environments in terms of operating
systems, runtime environments and software libraries.
Corporate-level security standards are guaranteed across every stage of data
elaboration, while restrictions and data protection techniques ensure
security and reliability. Secure computation techniques will be adopted to
perform the required tasks, identifying and loading the least amount of data
to be processed. Intermediate results of elaborations are cleaned as soon as
they are no longer subject to further activity. Data processing may include
anonymization or pseudonymization techniques if required, in order to comply
with GDPR principles. Moreover, any data formatting and/or data format
transformation may be incorporated, as well as metadata information, which
will be added to the dataset so as to make them findable.
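One common pseudonymisation technique compatible with the measures described
above is keyed hashing of identifiers; the following is a minimal sketch, in
which the secret key would be stored separately from the pseudonymised data.

```python
# A minimal keyed-pseudonymisation sketch; the key value is a placeholder
# and would be kept in a secure store, separate from the data.
import hashlib, hmac

SECRET_KEY = b"kept-separately-from-the-data"    # placeholder

def pseudonymise(user_id: str) -> str:
    """Stable pseudonym, not reversible without the secret key."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

print(pseudonymise("john.doe@example.org"))
```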
Processing of data will also be performed within the cloud platform of ICCS.
The platform ensures that both digital and physical access to the server are
controlled. Moreover, exposed data interfaces will follow the data security
methodology presented in the previous paragraph. Minimal processing related
to data transmission and data visualisation will be executed within the
mobile application. This processing will ensure data integrity and will not
expose data through any non-authorised interfaces.
## Data Sharing
In RESIST data sharing occurs at different levels and mostly among project
partners:
Regarding the project activities, in principle, information flows among
partners for internal consortium consumption in the tasks, or can be the
subject of targeted dissemination activities on the condition that it is
classified as public.
During the pilots' execution, data are not expected to be released to any
external parties. If needed, though, the level of detail of shared data will
be the minimum required.
With reference to the project’s internal data and documents, sharing security
is built on the tools and the platforms the consortium adopts:
The Redmine collaborative tool, is selected as the repository for internal
collaborative documents, working drafts, minutes of meetings and phone calls
targeted to private access for the consortium members. Datasets that will be
shared within the consortium are also expected to be stored in the Redmine
project repository. Regular backup strategy and data recovery procedures will
be followed.
Any project data that will be needed to be publicly accessible, will be,
through public open data and open access platforms such as Zenodo side by side
with the project’s website.
Redmine as previously described is a free and open source, web-based project
management and issue tracking tool. It is also featured as a centralized
password-protected repository and workspace, built over secure network
connection. It allows users to manage multiple projects and associated
subprojects. It features per project wikis and forums, time tracking, and
flexible, role-based access control. It includes a calendar and Gantt charts
to aid visual representation of projects and their deadlines. Redmine
integrates with various version control systems and includes a repository
browser and diff viewer.
Zenodo is an open-source, open-data platform that can serve our needs; it
provides a repository for European Commission funded research, built on the
OpenAIRE project and CERN-developed tools. Clear data access, sharing and
re-use policies are enforced.
Any data applicable for sharing will be shared:
* according to state-of-the-art and industry security practices
* through authorized network and UI interfaces following industry standard security practices
## Data Storage and Preservation
In this section we will present the storage and preservation aspects of data
management.
Project working documents are stored in the Redmine collaborative tool.
The platform is set under the responsibility of the project coordinator (ICCS)
deployed on the organization’s data center who guarantees environmental
protection. Project data are persisted in a modern and reliable database
system. Access to the database is restricted to authenticated users and
different authorization profiles are defined according to business-related
data access restrictions. Connections for system and data administration are
protected and audited.
A backup and recovery strategy for the system has been planned according to
common best practices such as daily full backups and more frequent incremental
backups.
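As an illustration of such a backup strategy, the sketch below uses rsync
with hard-links so that daily snapshots only store changed files; the paths
and invocation are illustrative assumptions, not the coordinator's actual
configuration.

```python
# A minimal full/incremental backup sketch using rsync hard-links; paths
# are illustrative assumptions. Unchanged files are hard-linked against the
# previous snapshot, so each daily snapshot stores only the changes.
import datetime, subprocess

SOURCE = "/srv/redmine/data/"   # placeholder source directory
DEST = "/backup/redmine"        # placeholder backup root

def backup() -> None:
    today = datetime.date.today().isoformat()
    subprocess.run(
        ["rsync", "-a", "--delete",
         f"--link-dest={DEST}/latest",   # hard-link unchanged files
         SOURCE, f"{DEST}/{today}"],
        check=True,
    )
    # Point the "latest" symlink at the snapshot just taken
    subprocess.run(["ln", "-sfn", f"{DEST}/{today}", f"{DEST}/latest"],
                   check=True)
```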
After the end of the project, open data will be archived and preserved in
partners' repositories, in the Redmine platform and on the project's website
for five years. Regarding confidential data deposited in the Redmine
repository, decisions about long-term preservation and curation have not yet
been taken. RESIST will also ensure the preservation of the necessary data
anonymity. Proper anonymization techniques will be applied to guarantee the
anonymity of users' personal, sensitive data (e.g. emails and contact info of
the users of the platform). Any user personal data obtained within the
project will be transmitted to partners within the consortium only after
anonymization or pseudonymization techniques have been applied.
Zenodo offers data storage in CERN Data Centers, primarily in Geneva, with
replicas in Budapest. Data files and metadata are replicated in a distributed
file system and backed up nightly. Data will be retained for the lifetime of
the repository, which nowadays is defined for the next 20 years at least.
A secure persistence layer will be hosted within the cloud interface of ICCS.
Minimal data collections will be stored within the mobile device, within the
mobile device’s secure persistence layer. The majority of data collected will
be transmitted to associated remote RESIST modules and will be stored
according to the practices described in the related sections of the document.
# Ethical aspects
In RESIST, there are three dedicated separate deliverables D12.1 H requirement
No.1, D12.2 POPD Requirement No. 2 and D12.3 EPQ Requirement No.3 which cover
ethical aspects regarding the project and the corresponding actions with
regards to its tasks.
Those deliverables address the procedures and criteria that will be used to
identify/recruit research participants. An in-depth description of the
informed consent procedures that will be implemented for the participation of
humans has been provided, and the relevant templates of the informed
consent/assent forms and information sheets have been produced.
The project information sheet, accompanied by the informed consent/assent
forms, is included in deliverable D12.1 H - Requirement No. 1 and in
deliverable D12.2 POPD - Requirement No. 2. The informed consent forms are
intended to be signed by participants prior to their engagement in any
participatory method of the project, after having read and fully understood
the information provided in the project's information sheet and, of course,
any additional information provided on the informed consent form. These forms
will also be offered to all volunteers prior to their participation in the
large scale demonstration pilots that will be held.
The aforementioned deliverables also cover procedures for data collection,
storage, protection, retention, and destruction, confirmation that these
comply with national and EU legislation, and detailed information on the
informed consent procedures in regard to the collection, storage, and
protection of personal data, in compliance with the Data Protection Directive
95/46/EC, the national legislation and the EU General Data Protection
Regulation (GDPR). Extensive investigation has taken place on whether any
certain compliance and/or authorization is required under the national laws
of Italy and Greece for the collection and processing of personal data.
Finally, other ethical aspects investigated had to do with possible harm to
the environment caused by the research and the proposed measures that will be
taken to mitigate the risks and on relevant authorizations needed for the
proposed use of unmanned aerial vehicles in the project.
# Data Protection
In this chapter we address the relevant data protection aspects from
different perspectives. The chapter has a supplementary role to deliverable
“D12.2 POPD Requirement No. 2”, which is dedicated to the protection of
personal data. First, we outline the European framework on privacy and
security and provide the general principles. A brief description of the data
subject's rights and of the consent that must be provided for the processing
of data follows. The controller's and processor's obligations are also listed
in accordance with the new regulations. Finally, cybersecurity certification,
privacy-related matters, the measures that RESIST will take to mitigate any
possible risks, and matters regarding user recruitment are discussed.
## GDPR Requirements
### General principles
The Regulation (EU) 2016/679 of the European Parliament and of the Council of
27 April 2016 on the protection of natural persons with regard to the
processing of personal data and on the free movement of such data (General
Data Protection Regulation - GDPR) [1] comes into force on 25 May 2018. The
new regulation has been heralded as the most innovative and important change
in data privacy regulation in 20 years and as an essential step to strengthen
citizens' fundamental rights in the digital age and to facilitate business by
simplifying rules for companies in the Digital Single Market.
The aim of the GDPR is to protect all EU citizens from privacy and data
breaches in an increasingly data-driven world that is vastly different from
the time in which the former 1995 directive (Data Protection Directive
95/46/EC) was established. An overview of the main changes is provided below:
**Increased territorial scope (extra-territorial applicability)**
The GDPR’s section on territorial scope can be found in Article 3 of the
regulation. Here it’s made clear that the regulation applies wherever
processing of data takes place, be it within or outside the European Union.
The Article applies:
* "to the processing of personal data in the context of the activities of an establishment of a controller or a processor in the Union, whether or not the processing takes place in the Union;
* to the processing of personal data of data subjects who are in the Union by a controller or processor not established in the Union where the processing relates to the offering of goods or services (whether free or paid for) or the monitoring of behavior which takes place within the EU; and
* to the processing of any personal data by a controller outside the EU but in a jurisdiction where Member State law applies by virtue of international law (e.g. a diplomatic mission or consular post)."
It is not envisaged that there will be any need to transfer data for the
RESIST project outside the EU. Nevertheless, in the case of dissemination and
exploitation activities involving non-EU states, this may become relevant.
**Penalties**
Organizations found to be in breach of GDPR can be fined up to 4% of annual
global turnover or €20 Million (whichever is greater). This is the maximum
fine that can be imposed for the most serious infringements, e.g. not having
sufficient customer consent to process data or violating the core of Privacy
by Design concepts.
**Consent**
All the relevant conditions regarding consent have been strengthened, and
companies will no longer be able to use long illegible terms and conditions
full of legalese. Key principles are as follows:
* The request for consent must be given in an intelligible and easily accessible form.
* It should clearly state the purpose of data processing attached to that consent. Importantly, it must be as easy to withdraw consent as it is to give it.
Below are some useful definitions from the GDPR:
**Personal data** means any information relating to an identified or
identifiable natural person (‘data subject’); an identifiable natural person
is one who can be identified, directly or indirectly, in particular by
reference to an identifier such as a name, an identification number, location
data, an online identifier or to one or more factors specific to the physical,
physiological, genetic, mental, economic, cultural or social identity of that
natural person.
**Processing** means any operation or set of operations which is performed on
personal data or on sets of personal data, whether or not by automated means,
such as collection, recording, organisation, structuring, storage, adaptation
or alteration, retrieval, consultation, use, disclosure by transmission,
dissemination or otherwise making available, alignment or combination,
restriction, erasure or destruction.
**Controller** means the natural or legal person, public authority, agency or
other body which, alone or jointly with others, determines the purposes and
means of the processing of personal data.
**Processor** means a natural or legal person, public authority, agency or
other body which processes personal data on behalf of the controller.
The general principles pursued by the GDPR requirements are that personal data
shall be:
* processed **lawfully** , fairly and in a **transparent** manner in relation to individuals,
* collected for specified, explicit and legitimate purposes and further processed for scientific purposes,
* adequate, **relevant and limited** to what is necessary for the purposes for which they are processed,
* kept in a form which permits **identification of data subjects for no longer than is necessary** for the purposes for which the personal data are processed,
* processed in a manner that ensures appropriate **security** of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage.
The GDPR sets specific requirements for processing of special categories of
personal data (for example revealing race, health, politics, religion), but
such data will not be collected or processed in RESIST.
### Data subject’s rights
**Data subject (citizen) rights** brought into effect by the GDPR are as
follows:
**Right to be informed**
Fair processing information should be provided to the subjects. The
information should be concise, transparent, intelligible and easily
accessible. It should be written in clear and plain language and it should be
given free of charge.
**Breach notification**
Under the GDPR, breach notification will become mandatory where a data breach
is likely to “result in a risk for the rights and freedoms of individuals”.
This must be done within 72 hours of first having become aware of the breach.
**Right to Access**
Part of the expanded rights to data subjects outlined by the GDPR is the right
for data subjects to obtain from the data controller confirmation as to
whether personal data concerning them is being processed, where and for what
purpose. Further, the controller shall provide a copy of the personal data,
free of charge, in an electronic format. This is a dramatic change to data
transparency and empowerment of citizens/subjects.
**Right to rectification**
Individuals are entitled to have their personal data rectified if it is
inaccurate or incomplete. A response should be given within one month of the
request.
**Right to be forgotten**
Also known as **Data Erasure**, the right to be forgotten entitles the data
subject to have the controller erase his/her personal data, cease further
dissemination of the data, and potentially have third parties halt processing
of the data. The conditions for erasure, as outlined in Article 17, include
the data no longer being relevant to the original purposes of processing, or
the data subject withdrawing consent. It should also be noted that this right
requires controllers to weigh the subject's rights against "the public
interest in the availability of the data" when considering such requests.
**Right to restrict processing**
Individuals have the right to block processing of personal data under certain
conditions, in which case data can be stored but not processed.
**Right to data portability**
This right allows individuals to obtain and reuse their personal data for
their own purposes across different services. Therefore, personal data should
be easily copied, moved, transferred from one IT environment to another in a
safe and secure way and free of charge. This means that, if requested by the
subject, personal data should be provided in a structured, commonly used and
machine-readable form (for example as a csv file).
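As a simple illustration of such a machine-readable export, the following
Python sketch writes a subject's data to a CSV file; the field names and file
name are illustrative assumptions, not part of the RESIST specification.

```python
import csv

# Hypothetical subject data for illustration only; the actual fields
# would depend on what the project has stored for the data subject.
subject_data = [
    {"field": "name", "value": "Jane Doe"},
    {"field": "email", "value": "jane.doe@example.org"},
]

# Write the data in a structured, commonly used, machine-readable form.
with open("subject_export.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["field", "value"])
    writer.writeheader()
    writer.writerows(subject_data)
```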
**Right to object**
Individuals have the right to object to processing. Processing must stop as
soon as an objection is received. Individuals must be informed of their right
to object “at the point of first communication”. This right should be
presented clearly and separately from other information.
**Rights related to automated decision making including profiling**
Profiling includes algorithms to analyse or predict behaviour, location or
movements. In such cases:
* meaningful information about the logic involved in profiling should be provided together with the significance and consequences
* appropriate mathematical or statistical procedures should be involved for the profiling
* measures to enable the correction of inaccuracies and to minimise risk of errors should be implemented
Automated decision making must not concern children and must not be based on
processing special categories of data.
## Cybersecurity certification
On 13 September 2017 the Commission issued a proposal for a regulation on
ENISA, the "EU Cybersecurity Agency", and on Information and Communication
Technology cybersecurity certification (the "Cybersecurity Act").
Certification plays a critical role in increasing trust and security in
products and services that are crucial for the digital single market. At the
moment, a number of different security certification schemes for ICT products
exist in the EU. Without a common framework for EU-wide valid cybersecurity
certificates, there is an increasing risk of fragmentation and barriers in the
single market.
The proposed certification framework will provide EU-wide certification
schemes as a comprehensive set of rules, technical requirements, standards and
procedures. This will be based on agreement at EU level for the evaluation of
the security properties of a specific ICT-based product or service, e.g. smart
cards.
The certification will attest that ICT products and services that have been
certified in accordance with such a scheme comply with specified cybersecurity
requirements. The resulting certificate will be recognized in all Member
States, making it easier for businesses to trade across borders and for
purchasers to understand the security features of the product or service.
The schemes proposed in the future European framework will rely as much as
possible on international standards, as a way to avoid creating trade barriers
and to ensure coherence with international initiatives.
In the context of RESIST, there is a dedicated work package on cyber security
and messaging/communications which, among others, will implement certain
mechanisms to ensure it, and the proposed EU certification framework must be
taken into consideration. Details on the RESIST cyber security mechanisms can
be found in deliverable D5.2 RESIST cybersecurity mechanisms.
## Privacy information
The RESIST project will make use of users’ data and information for research
purposes only. When applicable, aspects of the project relating to the
handling of end user data will have to take the following ethical and privacy
factors into consideration:
* Protection of the right to honor, personal and family privacy and image.
* Personal data protection.
* Secrecy of private communications.
* Protection of consumers and users.
The key aspects of the EU rules on data protection, as presented above with
regard to the European framework on privacy and security, are:
* Establishing the principle of data quality, so that personal data must be adequate, relevant and not excessive, according to the purpose for which it will be processed.
* The existence of prior consent with respect to the collection of data.
* Establishing basic principles of citizens' rights to access, rectification, cancellation and opposition in relation to their personal data.
* Establishing a basic principle of ensuring confidentiality and the obligation to implement appropriate security measures to ensure that access to information is limited and controlled.
* Establishing the basic principles of Data Protection that National Authorities should follow.
* Laying down the foundations of international transfers of personal data.
## Measures to be taken
To be compliant with the regulations set by the EC, personal data will be
collected on a strictly need-to-know basis, solely for the purposes of the
RESIST project and will be destroyed when no longer needed for that purpose.
Technical and operational measures will be implemented to ensure that users
will be able to access, rectify, cancel and oppose the processing and storage
of their personal data.
Anonymized data with personal identifiers completely removed will be used. For
limited specific purposes, where multiple data samples need to be linked to
the same individual, pseudonymization by irreversible cryptographic pseudonyms
will be employed. The linkage information required for re-identification will
be stored in a separate database utilizing server-side encryption and tightly
controlled access. De-identification of data must be performed as soon as
possible after the data are collected. Specifically, in the case of data
gathered from the personal mobile apps, de-identification should occur
directly on the mobile device. Based on an in-depth risk assessment performed
at the initial stage of the project, additional horizontal and/or vertical
data partitioning may be used to further decompose data into smaller,
independently controlled parts.
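A minimal sketch of such an irreversible cryptographic pseudonym, using a
keyed hash, is given below; the key name and example identifier are
illustrative assumptions, and the key itself would be held in the separate,
access-controlled linkage database described above.

```python
import hmac
import hashlib

# Illustrative key; in practice it would be retrieved from the separate,
# tightly controlled linkage store, never shipped with the data.
PSEUDONYM_KEY = b"key-from-secure-linkage-store"

def pseudonymize(identifier: str) -> str:
    """Derive an irreversible pseudonym from a personal identifier using
    HMAC-SHA256; without the key the pseudonym cannot be recomputed."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Two samples from the same individual map to the same pseudonym, so
# records can be linked without storing the identity itself.
sample_a = {"subject": pseudonymize("jane.doe@example.org"), "hr": 72}
sample_b = {"subject": pseudonymize("jane.doe@example.org"), "hr": 75}
assert sample_a["subject"] == sample_b["subject"]
```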
Encrypted communication based on SSL/TLS/SSH will be used whenever any data or
information, especially personal data, is transferred between systems. As a
general rule, collected sensitive data will be stored as close to the point of
origin as possible. The data will be released for further processing by other
components of the RESIST architecture on a strict need-to-know basis: only the
required information will be released, with the minimum amount of detail and
specificity required for a given processing task. Whenever aggregated data are
sufficient, individual records, even anonymized ones, will not be used. Secure
computation techniques will be used to derive the required processing results
without having to collate all information in a single location.
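A minimal sketch of such an encrypted transfer follows, assuming a
hypothetical RESIST module endpoint; the URL and payload are illustrative, and
the `requests` library verifies the server certificate by default.

```python
import requests

# Hypothetical endpoint of a remote RESIST module; HTTPS ensures the
# channel is encrypted with TLS and the server identity is verified.
RESIST_MODULE_URL = "https://resist-module.example.org/api/v1/records"

def transmit(record: dict) -> None:
    """Send a de-identified record over an encrypted TLS channel."""
    response = requests.post(RESIST_MODULE_URL, json=record, timeout=10)
    response.raise_for_status()  # fail loudly rather than lose data silently
```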
Collected personal data are not expected to be transferred outside the EU.
Access to all data will be restricted using appropriate authentication and
authorization techniques (passwords, private keys, etc.). Individuals and
systems will be granted only the minimum access privileges required to fulfil
their roles. Access to personal information will be monitored and logged. It
will be ensured that all individuals working with personal data within the
project are aware of their responsibilities and obligations with respect to
data protection.
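The sketch below illustrates the combination of minimum-privilege
authorization and access logging; the roles and permissions are illustrative
assumptions, not the project's actual access model.

```python
import logging

logging.basicConfig(filename="personal_data_access.log", level=logging.INFO)

# Hypothetical role-to-permission mapping; the real mapping would come
# from the RESIST authorization configuration.
ROLE_PERMISSIONS = {
    "researcher": {"read_pseudonymized"},
    "dpo": {"read_pseudonymized", "read_linkage"},
}

def access_personal_data(user: str, role: str, action: str) -> bool:
    """Grant only the minimum privileges attached to a role, and log
    every attempt to access personal information."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    logging.info("user=%s role=%s action=%s allowed=%s",
                 user, role, action, allowed)
    return allowed
```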
The data collected in the context of the RESIST project and used beyond the
scope of the project, if any, will have to be declared to the National
Authorities/Agencies responsible for Data Protection. Every entity involved in
handling data in some way has to do this at national level: not only the
entities collecting data, but also the ones in charge of storing or processing
them. Subsequently, the appropriate Data Protection Certificates and Approvals
(DPCs) must be provided.
Informed consent procedures must be in place to ensure that participants are
informed about what personal data is tracked and what observations are made,
why this personal data is requested/obtained, what the personal data will be
used for, how their personal data will be handled, what will be involved in
taking part, and any potential risk they incur by participating in the user
studies, trials or use of the application. Additionally, participants must be
informed that their participation is completely voluntary and that they will
be able to withdraw from the studies at any time, without penalty and for any
reason. Regarding data coming directly from mobile devices, the user's
explicit consent will be obtained before the related application is launched.
# Protection of personal data in RESIST
Among the deliverables that have been foreseen in the DoA of the RESIST
project there are also D12.1 H - Requirement No. 1 and D12.2 POPD -
Requirement No. 2. These deliverables are part of work package 12, a work
package dedicated to Ethics, and cover the following contractual issues:
**D12.1:**
* The procedures and criteria that will be used to identify/recruit research participants.
* The informed consent procedures that will be implemented for the participation of humans.
* Templates of the informed consent/assent forms and information sheets (in language and terms intelligible to the participants).
**D12.2:**
* The applicant must check if a declaration on compliance and/or authorisation is required under national law for collecting and processing personal data as described in the proposal. If yes, the declaration on compliance and/or authorisation must be kept on file.
* If no declaration on compliance or authorisation is required under the applicable national law, a statement from the designated Data Protection Officer that all personal data collection and processing will be carried out according to EU and national legislation must be submitted.
* Detailed information on the procedures for data collection, storage, protection, retention, and destruction, and confirmation that they comply with national and EU legislation must be included in the Data Management Plan.
* Detailed information on the informed consent procedures in regard to the collection, storage, and protection of personal data must be submitted.
In the current deliverable, the data management plan, we offer a summary of
the D12.2 ethical matters and briefly address the protection of personal data
within the scope of the project, so as to avoid replicating information.
## General principles
Within RESIST, the purposes and means of the processing of the personal data
that will be collected and stored in databases are jointly determined by
certain partners, who will process the personal data for the purposes of
scientific analysis. Special category data are not foreseen to be collected or
processed.
For all personal data reported in the project, the names of the participants
will be replaced with ID codes to maintain anonymity. The identity of all
participants will be fully masked in any printed materials, project reports or
dissemination materials unless specific permission is provided. The RESIST
Project Coordinator (ICCS) will maintain all records and study documents.
These will be retained for at least 5 years or for longer if required by
individual institutions. If the responsible investigator is no longer able to
maintain the study records, a second person will be nominated to take over
this responsibility. The study documents shall be archived in the secure
facilities of ICCS (the project coordinating partner).
In cases where audio and/or video recording devices are used within our
studies, these will be the responsibility of individual members of the
research team; no one outside of the research team will have access to any of
these data. Personal media and other content will not be used in wider
dissemination of the research project unless consent is specifically provided.
Once transcribed, audio files and other media will be deleted from digital
recorders and stored digitally within a password-protected folder on the
network drives of the participating institutions. Other content will also be
stored in password-protected databases, encrypted and protected within
institutions according to internal procedures, available only to members of
the research team.
Each partner who provides or otherwise makes available data to any other
partner will represent that:
1. it has the authority to disclose such data which it provides to the partners;
2. where legally required and relevant, it has obtained appropriate informed consents from all the individuals involved, or from any other applicable institution, all in compliance with applicable regulations; and
3. there is no restriction in place that would prevent any such other partner from using such data for the purpose of the project.
## Getting data subject’s consent
As described in D12.1 and D12.2, any relevant participant in the project will
be asked to consent to the processing of their personal data prior to their
participation, after having been given an appropriate timeframe to review the
project information.
The potentially involved individuals will be offered a printed Consent Form
and asked to complete it, indicating whether or not they consent. A template
for the Consent Form is provided in Appendix B, but it will be modified
according to the actual project developments.
They will also be offered the project's Information Sheet, provided in
Appendix A, where subjects can find general information about the project,
their involvement and the processing of their personal data, and are informed
that they may withdraw consent at any time or exercise their rights if they
feel these have been violated.
The signed consent forms will be maintained locally by each partner and a copy
will be given to the data subject.
The RESIST Data Protection Officer (DPO) will review the consent forms on a
monthly basis and will consult with the RESIST partners, to check that the
processing and purposes have not changed from what has been communicated to
subjects.
## Safeguarding data subject’s rights
RESIST respects the subjects’ right to be informed via its information sheet
and consent form, which will be given to them prior to any involvement, and
which will be available at a dedicated area in its website as well.
In the dedicated area of the website, there will be an explanatory text
allowing the subjects to contact the Data Protection Officer (DPO), request
access to their personal data, request rectification, erasure or restriction
of processing of their personal data, or object to the processing of their
personal data. If such a request is received, the DPO will inform the project
Steering Committee, so that appropriate actions are taken. It will be possible
for a subject to request a copy of his/her personal data. If such a request is
received, the DPO will inform the project Steering Committee and the project
will provide the personal data in a structured, commonly used and
machine-readable CSV file, free of charge.
## Ensuring data security
All data, including personal data in RESIST will be securely processed.
Hard copies of consent forms and possible identity data needed by local
regulations (including names, telephone numbers, emails, addresses, ID cards,
driving licences, etc.) will be kept in locked drawers. Any electronic copies
will be stored locally, encrypted and protected according to internal
procedures by the responsible partner. Only authorised employees will have
access to such data. Such data will be handled according to the internal
procedures of each partner and will not be transferred to other RESIST
partners.
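A minimal sketch of encrypting such an electronic copy at rest, assuming the
third-party Python `cryptography` package; the file names are illustrative,
and each partner would keep the key in its own access-controlled key store
according to its internal procedures.

```python
from cryptography.fernet import Fernet

# Generate a key once and store it separately from the encrypted data
# (e.g. in the partner's access-controlled key store).
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt a scanned consent form (illustrative file name) before it is
# stored locally; only holders of the key can recover the plaintext.
with open("consent_form_scan.pdf", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

with open("consent_form_scan.pdf.enc", "wb") as f:
    f.write(ciphertext)

plaintext = cipher.decrypt(ciphertext)  # authorised access only
```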
RESIST project deliverables will be available in the project Redmine
( _https://redmine.iccs.gr/projects/resist_ ). The server hosting the Redmine
installation is located in the ICCS premises in Athens, Greece, in a secured
rack in the ICCS servers' room. The server databases are backed up on a daily
basis, while its files are backed up every second day. The server is built
with multiple redundancies, network- and disk-wise, in order to ensure its
constant operation and network access. Web access to https://redmine.iccs.gr/
is secured using a digital certificate from TERENA (https://www.terena.org).
Only users authorised by the administrator, ICCS, have access to the material
available in the Redmine. The data in the dedicated RESIST databases will be
maintained for 5 years after the end of the project. More details on the data
retention and the hosting platform are included in deliverable "D1.2 Internal
Project Server".
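For the file-level part of such a backup routine, a minimal sketch is given
below; the paths are hypothetical and the actual ICCS tooling is not described
here, while database dumps would be produced by the DBMS's own dump utility.

```python
import tarfile
from datetime import date
from pathlib import Path

# Hypothetical locations of the Redmine files and the backup target.
SOURCE = Path("/var/redmine/files")
DEST = Path("/backup/redmine")

def backup_files() -> Path:
    """Create a date-stamped compressed archive of the Redmine files,
    so that a copy exists independently of the live server."""
    DEST.mkdir(parents=True, exist_ok=True)
    archive = DEST / f"redmine-files-{date.today().isoformat()}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE, arcname="files")
    return archive
```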
## Accountability and governance
The present document describes the measures to be implemented, so that RESIST
complies with the GDPR requirements. This deliverable will be reviewed, and
updated as needed, by the Data Protection Officer.
RESIST implements the principles of data protection by design and data
protection by default. A data minimisation policy is foreseen to be adopted by
RESIST, which means that only data strictly necessary for the scientific
analyses will be stored in the long term. The subject's identity will not be
stored in it (pseudonymization), and the processing will be transparent to
subjects, since they will be informed in clear and easily understandable
language.
# Conclusions
This deliverable is the first updated version of the Data Management Plan of
the RESIST project, given that the project consortium has opted in to the Open
Research Data Pilot principles. It replicates the initial deliverable and
complements it with the changes required following the project's progress. In
general, it aims to provide an overview of the kinds of data produced, reused,
collected and processed within the project's context. It also tackles topics
related to challenges and constraints that need to be taken into
consideration, and addresses the data types with regard to their purpose and
relation to the project objectives, as well as how the consortium will make
sure that it complies with the FAIR data principles.
Almost all project partners within the consortium are owners and/or producers
of data, while some will just process data that others generate. This implies
specific responsibilities, which are extensively described in this report. The
RESIST Data Management Plan places strong emphasis on the appropriate
collection (and publication, if the data will be published) of metadata,
storing all the necessary information for the optimal use and reuse of those
datasets in compliance with all regulations.
This deliverable is still considered a living document and it will continue to
be updated for the duration of the project in order to remain as current as
possible.
**Executive Summary**
The PANOPTIS Data Management Plan is a living document that will be updated
where necessary. It describes the way the data of the project are managed
during the project duration and beyond. The objective of the data management
plan is to ensure that all types of data useful to the project (and to other
projects as well) are clearly identified and FAIR (easily Findable, openly
Accessible, Interoperable and Re-usable), and that they do not raise any
ethical or security concerns.
This version identifies the topics that need to be addressed in the data
management plan for the first reporting period of the project. It is the same
plan as V1.2 of the preliminary data management plan, as there has been no
modification of the system design since the delivery of the latter.
# Data Summary
The project is built on three main pillars, namely:
* Elaboration of precise forecasts (weather essentially but also other hazards when predictable),
* Elaboration of the vulnerabilities of the Road Infrastructure (RI) components,
* Monitoring of the RI status.
The data that will be collected and generated after processing fall within
these domains. An important aspect of PANOPTIS is the monitoring over time of
the events and their effects on the Road Infrastructure (RI). So, both for the
deep learning methods and for the statistics, the data have to be kept for
several years. Typically, we need data from the last ten years and data over
the whole duration of the project (4 years).
The origin of the data is the sensors and processing systems that can provide
a description of the environment and detect events that can threaten the RI.
Among these sensors and processing systems, there are:
* Satellites: EO/IR images for macroscopic events (flood, landslides, etc.) and SAR for smaller events (regular ground move).
* UAVs: In PANOPTIS, the UAVs are equipped with various types of cameras depending on the defects that need to be detected (EO/IR, multi-spectral, hyperspectral) and LIDARs to elaborate 3D maps. The size of the database collected for the project will be considerable, because it will contain thousands of high-resolution pictures taken during the project, plus pictures from external databases used to train the detection algorithms.
* Weather data: again a large volume of data, as the size of the base area used to compute the forecasts will be small.
* Hazard data: content and size depend on the hazards. In general, they take the form of hazard maps with different colours depending on the probability of occurrence and the resulting severity.
* Vulnerability data: these data will combine the descriptive data for the road and the supporting infrastructure (bridges, tunnels, etc.). On the 3D map, the defects will be superimposed (results of inspections and status assessment). The volume of data is once again dependent on the type of infrastructure (from the simplest, a road built directly on the terrain, to the more complex bridges).
The project will create data:
* WP3 will compute weather forecast/hazard forecast which will be stored as maps with additional free text comments.
* WP4 will elaborate the vulnerability of the roads and their supports.
* WP5 will collect the data from the sensors and pre-process them.
* WP6 will fuse the data to produce a Common Operational Picture (maps with risk, events, objects) completed by HRAP for decision support.
As the system capabilities are optimized with the data and statistics from
previous events, the data have to stay in the archives for a very long period
of time (at least during the whole life of the components).
The data related to the Road Infrastructure belong to the management agencies,
namely ACCIONA and Egnatia Odos. Any additional use that could be done of
these data has to be approved by them.
The data collected and processed from external services (weather, environment)
will be protected as per the respective contract clauses with these external
services. The data cycle is the following (EUDAT – OpenAIRE):
At each step of the cycle, the IPRs and contractual clauses need to be
respected. In particular: who owns these data, is the process applied to these
data allowed, where will the data be stored and for how long, who can have
access to these data, and to do what?
# FAIR data
## Making data findable, including provisions for metadata
The data produced in the project will be discoverable with metadata. The
majority of the data used and produced by the project will be time-stamped,
geo-referenced and classified (generally type of defects). The following
scheme shows the types of data that will be collected by the system with the
in situ sensors. The rest of the collected data will be provided by the UAVs
and the satellites.
The UAV are equipped with cameras (EO/IR) so the data are images with their
respective metadata. To create accurate 3D maps, the UAVs can also be equipped
with Lidars and in this case, the data will be a cloud of points.
In PANOPTIS, two types of satellite instruments will be used:
* Cameras (visible images), which will be processed like UAV images but to detect more macroscopic events (floods, landslides, collapses of bridges, mountain rubble, etc.). The images will be provided by SENTINEL 2 or SPOT 6/7.
* SAR (Synthetic Aperture Radar): radar images to detect small movements. The radar images will be provided by SENTINEL 3 (SENTINEL 1 does not have enough precision to identify the changes that are of interest for PANOPTIS).
The detailed list of the data used and processed in PANOPTIS is provided below.
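As an illustration of such time-stamped, geo-referenced and classified
metadata, the sketch below serializes one hypothetical UAV image record; the
field names are assumptions for illustration, not the agreed PANOPTIS schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical metadata record for a single UAV image.
record = {
    "dataset": "UAV data",
    "acquired_at": datetime(2019, 5, 14, 10, 30,
                            tzinfo=timezone.utc).isoformat(),
    "coordinate_system": "EPSG:4326",  # WGS 84
    "location": {"lat": 40.6401, "lon": 22.9444},
    "sensor": {"type": "RGB", "flight_height_m": 60, "tilt_deg": 45},
    "classification": "pavement_crack",
}
print(json.dumps(record, indent=2))
```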
<table>
<tr>
<th>
**DATASET NAME**
</th>
<th>
**Data from SHM sensors**
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Referring to data from sensors installed in the demo sites for monitoring the
structural health of the different Road Infrastructures (RI). These can be of
geotechnical focus in the Greek site (inclinometers, accelerometers,
seismographs, etc.), and corrosion sensors in Reinforced Concrete (RC) in the
Spanish site.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Direct insitu measurements (Spanish and Greek demosites). Accessible from
local legacy data acquisition systems
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA (Spanish demosite), EOAE (Greek demosite)
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA (Spanish demosite), EOAE (Greek demosite)
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
WP4 partners (IFS, NTUA, SOF, C4controls, AUTH, ITC)
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA and EOAE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4, (all tasks), WP7 (Task 7.5)
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
* Geotechnical data: angle of friction, cohesion, dry unit weight, Young's modulus, void ratio, soil permeability coefficient, soil porosity, soil bearing capacity.
* Corrosion data: the wireless sensors located on multiple monitoring points provide electrical parameters such as corrosion current density (iCORR), electrical resistance of concrete (RS) of the system, and the double layer capacity (CDL) to a unique electronic system. The information directly stored by the
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
electronic system consists of raw data from the sensors (electrical response).
In order to transform these primary data into profitable monitoring
information, a specific computer tool based on the R software of the R
Development Core Team is used. This application allows the data analysis
process to be executed in a fast and automated way. As a result, a series of
easily interpretable graphs are obtained. All the monitoring graphics are
updated daily in an automated way and are available from any of the computers
linked to the system.
</th> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
* **Geotechnical sensors:** settlement cells, vertical inclinometers, horizontal inclinometers, rod extensometers, standpipe piezometers, pneumatic piezometers
* **Corrosion sensors:** extensions .R, .rda, .Rdata. Graphs updated every day during the demo period (foreseen period of 2 years)
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
* Feed geotechnical model of cut-slope located at active landslide region (Greek site)
* Feed structural models of bridges (Greek site)
* Feed corrosion model of reinforced concrete underpass (Spanish site)
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Confidential (only for members of the Consortium and the Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
* **Geotechnical sensors:** settlement cells, vertical inclinometers, horizontal inclinometers, rod extensometers, standpipe piezometers, pneumatic piezometers
* **Corrosion sensors:** during the project, any computer from the PANOPTIS partners involved can be linked to the local measurement system. The PANOPTIS system will be connected to the local monitoring system as well.
These data shall not be disclosed, by any means whatsoever, in whole or in
part. However, publication and dissemination of these data is possible after
previous approval by ACCIONA/EOAE. Prior notice of any planned publication
shall be given to ACCIONA/EOAE
</td> </tr>
<tr>
<td>
</td>
<td>
at least 45 calendar days before the publication
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
no
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA and EOAE control centres; PANOPTIS backup system. Information
generated during the project will be kept for at least 4 years after the
project in the project repository.
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Data from weather stations and pavement sensors
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Local weather data coming from legacy weather stations (belonging to end-
users) and new PANOPTIS micro weather stations. Main parameters: Temperature,
relative humidity, pavement temperature, pavement humidity, wind speed, wind
direction, rain precipitations, presence of ice, chemical concentration,
freeze point of solution on the surface.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
In situ measurements of weather stations.
Accessible from local legacy data acquisition
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA and EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA and EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
FINT, AUTH, HYDS, FMI, IFS
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA, EOAE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP3 (Tasks 3.5, 3.6, 3.7), WP4 (Tasks 4.1, 4.2, 4.3,
4.4), WP7 (Task 7.5), WP2 (Task 2.4 and Task 2.5)
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Data is produced online, in real time, every 3 hours (although the frequency
can be adapted), and stored at ACCIONA/EOAE legacy data acquisition system.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Data can be downloaded from the end-users' legacy data management tool in the
form of .pdf, .xlsx, .doc files. The selection of specific date ranges and
parameters is possible. The size of the data depends on the date range and the
number of parameters selected (various kB-MB per file).
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td>
<td>
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Providing real-time information of the weather conditions and forecasts for
the DSS.
</td> </tr>
<tr>
<td>
</td>
<td>
* Update climatic models
* Update risk models
* Update ice prone areas on the road surface for winter operations management
* Rain precipitations data is fed to geotechnical and erosion models of slopes
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Confidential (only for members of the Consortium and the Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
PANOPTIS partners can access the data via the ACCIONA and EOAE legacy data
acquisition systems during the project. At some point of the project, the
weather stations will transfer data online to the PANOPTIS system.
ACCIONA/EOAE must always authorise dissemination and publication of data
generated with legacy systems (existing weather stations): it is historic
data, not generated for the project. Publication and dissemination of data
from PANOPTIS micro weather stations must be approved by ACCIONA/EOAE. Prior
notice of any planned publication shall be given to ACCIONA/EOAE at least 45
calendar days before the publication. The use of data from PANOPTIS micro
weather stations for any other purposes shall be considered a breach of this
Agreement.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
no
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA and EOAE control centres. Data generated during the project must be
stored for at least 4 years.
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Thermal map of Spanish A2 Highway (pk 62-pk 139.5)
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Thermal profile of the road surface; thermal characteristics per georeferenced
zone along the road corridor
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA data base
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
IFS, FMI, HYDS, AUTH, ITC
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP3 (Tasks 3.5, 3.6, 3.7), WP2 (task 2.5), WP4 (Tasks 4.1, 4.3)
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Test performed upon request
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
.kmz, 138 kB
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Identify ice-prone areas on the road corridor (vulnerable RI). These areas
should be equipped with sensors to control ice formation.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Confidential (only for members of the Consortium and the Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
These data shall not be disclosed, by any means whatsoever, in whole or in
part. However publication and dissemination of these data is possible after
previous approval by ACCIONA. Prior notice of any planned publication shall be
given to ACCIONA at least 45 calendar days before the publication
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
no
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA, control centre, for the duration of the concession contract
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
UAV data
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Data taken in UAV missions, comprising all the datasets obtained with the
different kind of sensors (RGB, LiDAR, IR, etc.) used in the project
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA acquisitions, ITC acquisitions
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA, EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ITC, ACCIONA, EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
ITC, NTUA
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA, EOAE
</td> </tr> </table>
<table>
<tr>
<th>
Related WP(s) and task(s)
</th>
<th>
WP5, WP4(4.5), WP7 (Task 7.5)
</th> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Data is produced under scheduled missions and shared with end users and WP5
partners for processing.
Metadata should include:
* Date/time of data acquisition
* Coordinate system information
* Information on the UAV system (camera info, flight height, tilt/angle of camera)
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Depending on the sensor used:
* Optical: images/video (.JPEG, .MP4)
* Multispectral: images
* Thermal infrared: images/video (.JPEG, .TIFF, .MJPEG)
* Point cloud: ASCII
The estimated volume of images and videos depends on the number and size of
the inspected road corridor elements and could range from one to a couple of
hundred GB.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
* Inspection and degradation assessment of road infrastructure: including slopes erosion; road pavement degradation; cracks in concrete bridges/underpasses, overpasses; degradation of road furniture; vegetation encroaching;
corrosion of steel elements
* 3D modelling of road infrastructure
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
confidential (only for members of the Consortium and the Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
These data shall not be disclosed, by any means whatsoever, in whole or in
part. However publication and dissemination of these data is possible after
previous approval by ACCIONA/EOAE. Prior notice of any planned publication
shall be given to ACCIONA/EOAE at least 45 calendar days before the
publication
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
PANOPTIS backup system, during 4 years following the end of the project
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
RGB camera data
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Imagery from a fixed camera monitoring the soil erosion on the slope at pk 64
of the A2 Highway (Spanish demo)
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA fixed camera (to be installed within the project). Accessible from the
local legacy data acquisition system and to be accessible from the PANOPTIS
system (online).
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
NTUA
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
NTUA
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Data is produced as a continuous data stream, sent online, and stored in the
PANOPTIS system and the ACCIONA legacy data management system.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
High-quality JPEG images; continuous data stream
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td>
<td>
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
An empirical approach can be applied for erosion of slopes, comparing data on
local water precipitation (from micro weather stations) with volume of soil
erosion (from RGB camera).
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
confidential (only for members of the Consortium and the Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
These data shall not be disclosed, by any means whatsoever, in whole or in
part. However publication and dissemination of these data is possible after
previous approval by ACCIONA. Prior notice of any planned publication shall be
given to ACCIONA at least 45 calendar days before the publication
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
Yes, occasionally, when any operation is carried out by the concessionary
staff.
The consent will be managed when necessary.
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data storage in PANOPTIS system for at least 4 years after the end of the
project
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Videos of road surface and road assets
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Videos of the road surface and road assets taken with a 360-degree camera
(Garmin VIRB) by ACCIONA
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA database
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
ITC
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ITC
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Videos are acquired by ACCIONA every month and shared with the involved
partners (ITC) via a file sharing service for processing. Software for editing
VIRB 360 videos:
_https://www.youtube.com/watch?v=COItl8HDEko_
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
.MP4
Video raw mode: 5K (2 files at 2496 × 2496 px) or 5.7K (2 files at 2880 × 2880 px)
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td>
<td>
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Road surface image analysis for deterioration assessment
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
confidential (only for members of the Consortium and the Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Dissemination of this data must always be authorised by ACCIONA (it is
historic data, not produced for the project). Prior notice of any planned
publication shall be given to ACCIONA at least 45 calendar days before the
publication. The use of Confidential Information for any other purposes shall
be considered a breach of this Agreement.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
</td>
<td>
Yes, occasionally, when any operation is carried out by the concessionary
staff.
</td> </tr>
<tr>
<td>
(written) consent from data subjects to collect this information?
</td>
<td>
The consent will be managed when necessary.
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data storage in PANOPTIS system for at least 4 years after the end of the
project
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Data Laser Crack Measurement System (LCMS)
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
3D (point cloud) data of the road, labelled by the LCMS system; cracking test
results
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA data base (inspection test separate of the project)
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
ITC
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Data is obtained under scheduled inspection missions and stored at the ACCIONA
control centre. ACCIONA shares the results with the image analysis experts of
the project via a file sharing service.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Point cloud: ASCII (.ply, .las, .pts) with x, y, z information (coordinates).
Excel file summarising cracking results on the corridor.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td>
<td>
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
3D information of road surface distresses for deterioration assessment
(quantification of damage).
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
confidential (only for members of the Consortium and the Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Dissemination of this data must always be authorised by ACCIONA (it is
historic data, not produced for the project). Prior notice of any planned
publication shall be given to ACCIONA at least 45 calendar days before the
publication. The use of Confidential Information for any other purposes shall
be considered a breach of this Agreement.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA data base during the duration of the highway concession contract
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
3D scan data using Terrestrial Laser Scanner system.
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
3D scan data (point cloud) of slopes on the Spanish A2 highway using a Trimble
SX10 scanning total station
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA database
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
ITC
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Data acquired under scheduled mission by ACCIONA, stored in ACCIONA database
and shared with PANOPTIS image analysis experts via file sharing service
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Point cloud: ASCII, 1 to 5 GB per scan.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
3D model of slopes for high-precision monitoring of soil erosion and
landslides over time (evolution of the 3D models with time)
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
confidential (only for members of the Consortium and the Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
These data shall not be disclosed, by any means whatsoever, in whole or in
part. However publication and dissemination of these data is possible after
previous approval by ACCIONA. Prior notice of any planned publication shall be
given to ACCIONA/EOAE at least 45 calendar days before the publication
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA Control centre, until the end of the concession contract.
</td> </tr> </table>
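The ASCII point clouds listed above are plain text with one point per line and can reach several gigabytes per scan. As a minimal sketch of how such a file might be streamed without loading it entirely into memory, assuming a simple whitespace-separated `x y z` layout per line (the actual scanner export may carry extra columns such as intensity or colour, and the file name is a placeholder):

```python
# Minimal sketch: stream an ASCII point cloud (assumed "x y z" per line)
# and compute its bounding box without loading the whole file into memory.
# The real scanner export may include extra columns (intensity, RGB);
# adjust the parsing accordingly.

def bounding_box(path):
    mins = [float("inf")] * 3
    maxs = [float("-inf")] * 3
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 3:
                continue  # skip headers or malformed lines
            try:
                x, y, z = (float(v) for v in parts[:3])
            except ValueError:
                continue
            for i, v in enumerate((x, y, z)):
                mins[i] = min(mins[i], v)
                maxs[i] = max(maxs[i], v)
    return mins, maxs

lo, hi = bounding_box("scan.xyz")  # hypothetical file name
print("min:", lo, "max:", hi)
```

Such a streaming pass is a common first step before comparing successive scans of the same slope over time.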
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Results of inspection tests on RI
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Results of inspection tests performed outside the scope of the project but
used in it. For instance, for the road surface: IRI results, slip resistance,
transverse evenness, strength properties and macrotexture; also results of
bridge inspections and of slope inspections
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA/EOAE database
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
ITC, IFS
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5, WP4
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Inspection tests are performed according to a yearly plan. For instance, IRI
tests are run twice per year, and slip resistance of the road surface is
tested three times per year, with an additional test every two years. The
data produced are stored in the ACCIONA/EOAE legacy data management system
and shared with the PANOPTIS partners involved upon request.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Format and size are specific to each test. Results are presented in the form
of reports (.xlsx, .pdf)
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td>
<td>
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Vulnerability analysis
Input for deterioration analysis via image analysis
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
confidential (only for members of the Consortium and the Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Dissemination of these data must always be authorised by ACCIONA/EOAE (they
are historic data, not produced for the project). Prior notice of any planned
publication shall be given to ACCIONA/EOAE at least 45 calendar days before
the publication. The use of Confidential Information for any other purpose
shall be considered a breach of this Agreement.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA/EOAE legacy data management system, until at least the end of the
concession contract
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Historic inventories of events in the demosites
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Incidents, accidents, procedures applied, lessons learnt
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA and EOAE database
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA, EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA, EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
IFS
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA, EOAE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Inventory of historical data (actions, accidents, incidents, etc.)
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Reports in .xlsx or .pdf format
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Vulnerability analysis
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Dissemination level: confidential (only for members of the Consortium and the
Commission
Services).
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Dissemination of these data must always be authorised by ACCIONA/EOAE (they
are historic data, not produced for the project). Prior notice of any planned
publication shall be given to ACCIONA/EOAE at least 45 calendar days before
the publication. The use of Confidential Information for any other purpose
shall be considered a breach of this Agreement.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA/EOAE database, at least until the end of the concession project
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Winter operations data
</th> </tr> </table>
<table>
<tr>
<th>
Data Identification
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Preventive and curative protocols applied on the road surface (salt/brine use
per GPS location) for the last winter seasons
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA/ EOAE database
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA/ EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA/ EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
IFS
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA/ EOAE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4, WP7 (Task 7.5)
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
An inventory of the winter operations carried out, including salt/brine
spreading and removal of snow from the road surface, is produced on every day
on which an action is performed (i.e. the anti-icing protocol is activated).
The inventory reports the area affected (km range) and the exact time/date.
All the information is stored in the data management tool of the end-users
and is shared upon request with the PANOPTIS partners involved.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Daily or yearly reports detailing the actions taken are issued as .pdf or
.xlsx files (hundreds of kB).
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
* Relate the use of salt/brine in deicing operations to pavement deterioration and to corrosion of the reinforcement in reinforced concrete
* Create models to optimise the use of deicing agents in winter operations
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Dissemination level: confidential (only for members of the Consortium and the
Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Dissemination of these data must always be authorised by ACCIONA/EOAE (they
are historic data, not produced for the project). Prior notice of any planned
publication shall be given to ACCIONA/EOAE at least 45 calendar days before
the publication. The use of Confidential Information for any other purpose
shall be considered a breach of this Agreement.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA/EOAE database, at least until the end of the concession project
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Design details of the road corridor of the Spanish A2 Highway and the Greek
Egnatia Odos Highway
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Inventory, location and design of road infrastructure, slopes, ditches,
transverse drainage works, road sections, road signs. Drawings, geometry,
topography, DTM, DSM, geotechnical surveys of the RI.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
As-built project, rehabilitation projects, database of the Conservation
Agency
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
IFS, AUTH, NTUA
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP3, WP4
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Historic data of the end-users, stored in the control centres. They are
shared with PANOPTIS partners upon request.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Format and size depend on the file. Some indicative information below:
* Designs in .dwg, several MB
* Topography in .dwg, several MB
* Geotechnical surveys (.pdf reports), several MB.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td>
<td>
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Models of the RI under study
Information for vulnerability and risk analysis
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Dissemination level: confidential (only for members of the Consortium and the
Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Dissemination of these data must always be authorised by ACCIONA/EOAE (they
are historic data, not produced for the project). Prior notice of any planned
publication shall be given to ACCIONA/EOAE at least 45 calendar days before
the publication. The use of Confidential Information for any other purpose
shall be considered a breach of this Agreement.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA/EOAE
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
CCTV
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Imagery of CCTV installed on the road corridor
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA/EOAE legacy data acquisition systems
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
NTUA, C4C
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4, WP5, WP7 (Task 7.5)
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
On the Spanish A2T2, images are currently taken online every 5 minutes; the
data are accessible online in the legacy data management tool. Images are
also available for the Egnatia Odos motorway.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Accessible online via the legacy data management tool of the end-users.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td>
<td>
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Model the road corridor
Vehicle information in real time (risk and impact analysis)
Feed for the DSS module
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Dissemination level: confidential (only for members of the Consortium and the
Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Dissemination of these data must always be authorised by ACCIONA/EOAE (they
are historic data, not produced for the project). Prior notice of any planned
publication shall be given to ACCIONA/EOAE at least 45 calendar days before
the publication. The use of Confidential Information for any other purpose
shall be considered a breach of this Agreement.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA/EOAE database, at least until the end of the concession project
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Traffic information
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Traffic intensity per hour, per vehicle class (light or heavy), per direction
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA/EOAE control centres (legacy data management tool)
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
NTUA, IFS, C4C
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA/EOAE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP2 (Task 2.5), WP4 ,WP7 (Task 7.5)
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Information is produced online in real time. PANOPTIS partners can access it
via the legacy data management tool.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Accessible online via the legacy data management tool of the end-users.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Data used for vulnerability, risk and impact analysis
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Dissemination level: confidential (only for members of the Consortium and the
Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Dissemination of these data must always be authorised by ACCIONA/EOAE (they
are historic data, not produced for the project). Prior notice of any planned
publication shall be given to ACCIONA/EOAE at least 45 calendar days before
the publication. The use of Confidential Information for any other purpose
shall be considered a breach of this Agreement.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
Yes, occasionally, when an operation is carried out by the concessionaire's
staff. Consent will be managed when necessary.
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA/EOAE database, at least until the end of the concession project
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Data on the ACCIONA Smart Roads Management Tool (legacy data management system)
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Any information shared through the legacy ACCIONA Smart Road Tool
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ACCIONA control centres (legacy data management tool)
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
NTUA, IFS, C4C, FINT, AUTH, ADS, ITC
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP2 (Task 2.5), WP3, WP4, WP5, WP6, WP7 (Task
7.5)
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
PANOPTIS partners can access all the data about the RI in the data management
system of ACCIONA (subject to prior authorisation by ACCIONA).
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Accessible online
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Data used for vulnerability, risk and impact analysis, for feeding all the
models (weather, corrosion) and for image analysis from cameras
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Dissemination level: confidential (only for members of the Consortium and the
Commission Services)
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Dissemination of these data must always be authorised by ACCIONA/EOAE (they
are historic data, not produced for the project). Prior notice of any planned
publication shall be given to ACCIONA/EOAE at least 45 calendar days before
the publication. The use of Confidential Information for any other purpose
shall be considered a breach of this Agreement.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ACCIONA database, at least until the end of the concession contract
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Land use and cover
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Land use and land cover maps
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Open Access inventories of the Spanish Administration: Ministry of Finance
for land use
(https://www.sedecatastro.gob.es/Accesos/SECAccDescargaDatos.aspx); SIOSE
geoportal (Ministry of Public Works) and CORINE Land Cover for land cover
data
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
Open source data
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
AUTH, FMI
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
AUTH, FMI
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP2 (Task 2.4), WP3
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Data can be downloaded from the download services of the public agencies at
all three levels of the Spanish administration: national, regional and local
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Vector data in “.shp” format or raster formats such as “.geotiff”; several MB
(see the illustrative sketch after this table)
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Feed for climatic and geo-hazards models
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Public
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Open-source inventory; can be published.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data storage in PANOPTIS Open source repository for 4 years after the end of
the project.
</td> </tr> </table>
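As a hedged illustration of how these open land-use and land-cover layers might be ingested for the climatic and geo-hazard models, the sketch below reads a “.shp” vector layer and a “.geotiff” raster using the widely used geopandas and rasterio libraries; the file names are placeholders and this is not the PANOPTIS implementation.

```python
# Sketch only: read open land-use/land-cover layers in the formats listed
# above. File names are hypothetical placeholders.
import geopandas as gpd   # for ".shp" vector layers
import rasterio           # for ".geotiff" raster layers

# Vector polygons with land-use attributes (e.g. a SIOSE export)
land_use = gpd.read_file("siose_land_use.shp")
print(land_use.crs, len(land_use), "features")

# Raster land-cover grid (e.g. a CORINE Land Cover tile)
with rasterio.open("corine_land_cover.geotiff") as src:
    cover = src.read(1)   # first band as a numpy array
    print(src.crs, cover.shape)
```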
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Vegetation maps
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Vegetation maps of the areas surrounding the demosites
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Open Access inventories of the Spanish Ministry of Environment
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
Open source
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
AUTH, FMI
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
AUTH, FMI
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
AUTH, FMI
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP3
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Data can be downloaded from the download services of the public agencies at
all three levels of the Spanish administration: national, regional and local
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Vegetation maps in shapefile format;
LiDAR x,y,z data (.laz files, ASCII files, ESRI matrix (.asc));
several MB
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Improve simulations of the climate related hazards on the road
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Public
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Open-source inventory; can be published.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data storage in PANOPTIS Open source repository for 4 years after the end of
the project. Also in
National and Regional Open Source inventories.
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Hydrological data
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Hydrological maps, historical precipitation records, flood-prone areas
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Open Access inventories of the Spanish Ministry of Environment
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
Open source data
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ACCIONA
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
AUTH, FMI
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
AUTH, FMI
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP2 (Task 2.4), WP3
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
Data can be downloaded from the download services of the public agencies at
all three levels of the Spanish administration: national, regional and local.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
“.shp”, arpsis
Several MB
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Feed for climatic and geo-hazards models
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Public
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Open-source inventory; can be published.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
No
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data storage in PANOPTIS Open source repository for 4 years after the end of
the project. Also in National and Regional Open Source inventories.
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th>
<th>
Satellite data
</th> </tr>
<tr>
<td>
Data Identification
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Imagery (several processing levels available): JPEG 2000, GeoTIFF (Spot 6/7);
images, metadata, quality indicators and auxiliary data in SENTINEL-SAFE
format (JPEG 2000, .XML, .XML/GML) (Sentinel-2); images, metadata, quality
indicators and ground control points in .GEOTIFF, .ODL, .QB, .GCP (Landsat 7
ETM+)
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Spot 6/7, Sentinel-2, Landsat 7 ETM+
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td>
<td>
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
ADS
</td> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
ADS
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
ADS, ITC
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
</td> </tr>
<tr>
<td>
Info about metadata (production and
storage dates, places) and documentation?
</td>
<td>
In ADS databases
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Satellite images are composed of pixels; the pixel size depends on the
instrument. The images can be taken at various wavelengths (multi-spectral,
hyperspectral). For PANOPTIS, the number of satellite images will be limited
(due to the slow variation of the landscape and the cost of the images); the
expected volume is around 20 images.
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Identify the changes in the landscape and in the RI to detect possible
problems (landslides, rockslides, flows, etc.)
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the
Consortium and the Commission
Services) or Public
</td>
<td>
Public
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
The images are exploited and only the results of exploitation will be
distributed.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
ADS databases, for 10 years.
</td> </tr> </table>
## Making data openly accessible
At this stage of the project, we can hypothesise that the data will be
stored:
* In the project website repository.
* At the end-user premises/maintenance systems.
* In the integration platform (system repository).
* At the partners' premises.
Some of the data will be collected from external (open) databases in order to
develop system capabilities. This is especially true for images of defects on
RI and images of the effects of weather/disasters on RI. These images will be
used to calibrate the detection/analysis algorithms, as several modules will
use deep-learning techniques; the more images are available, the more
accurate the results should be.
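Purely as an illustration of this calibration workflow (not the PANOPTIS implementation), the sketch below loads externally collected defect images organised in the common folder-per-class layout using PyTorch/torchvision; the folder and class names are hypothetical.

```python
# Illustrative sketch: load externally collected RI-defect images arranged
# as one folder per class, e.g. defects/crack/*.jpg, defects/pothole/*.jpg
# (hypothetical layout), ready for training a deep-learning module.
import torch
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # a common input size for CNN backbones
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("defects", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

for images, labels in loader:
    # images: a batch of (3, 224, 224) tensors; labels: class indices
    print(images.shape, labels[:5])
    break
```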
Conversely, some data collected and processed in the project should be made
accessible to researchers outside the consortium so that they can use them
for similar purposes. The WP leaders will therefore decide after the trials
which data should be made accessible outside the consortium, respecting the
IPRs and the decisions of the data owners.
The repository that will be used for the open data will be accessible through
the project website hosted by NTUA.
## Making data interoperable
PANOPTIS deals with data that describe an environment which is the same all
over Europe (and the world). Meteorological data are in general standardised
(WMO), but the interpretation applied to them to produce alerts can vary. The
approach in PANOPTIS is to use existing standards as much as possible and to
propose standardisation efforts in domains where standards are not widely
used or do not yet exist.
For the vulnerability of infrastructures, although not completely
standardised, there are very similar approaches in Europe to defining an ID
card for infrastructure hot spots (bridges, tunnels). In the AEROBI project,
a bridge taxonomy has been proposed, as well as a bridge ontology that
enables standardisation of names and attributes. The taxonomy and the
ontology of bridges from AEROBI will be re-used in PANOPTIS.
For the Command and Control system/COP, the objects displayed in the
situation will be exchanged using pre-standardised or widespread formats:
collections of XML documents (NVG or TSO objects). Using these formats, the
situation elaborated in PANOPTIS can easily be exchanged with other parties
having a modern information system/control room/call centre (e.g. Civil
Protection, 112, road police, etc.).
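For illustration only, the fragment below builds a minimal XML "situation object" with Python's standard library; the element and attribute names are hypothetical and do not reproduce the normative NVG or TSO schemas.

```python
# Illustrative only: a minimal XML "situation object". Element and attribute
# names are hypothetical and do not reproduce the normative NVG/TSO schemas.
import xml.etree.ElementTree as ET

situation = ET.Element("situation")
point = ET.SubElement(situation, "point", {
    "id": "incident-001",              # hypothetical identifier
    "lat": "40.512",                   # latitude
    "lon": "-3.104",                   # longitude
    "label": "Landslide on A2 slope",  # human-readable label
})
ET.SubElement(point, "timestamp").text = "2019-03-01T10:15:00Z"

print(ET.tostring(situation, encoding="unicode"))
```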
## Increase data re-use (through clarifying licences)
The data will start to be available when the first version of the system is
integrated and validated (from month 24).
Of all the data collected and processed by the system, the data related to
the Road Infrastructure can be confidential. They belong to the road
operators (ACCIONA and Egnatia Odos respectively), so if any third party
outside the consortium wants to use them, case-by-case authorisation is
needed from the operators.
The data should remain accessible after the end of the project: the project
website will be maintained for one year after the project, and the academic
and research partners of the project will continue to use it afterwards.
# Allocation of resources
The costs of making data FAIR in PANOPTIS are related to Task 2.4, managed by
AUTH with the support of FMI and the end-users (ACCIONA and Egnatia Odos).
The maintenance of these data after the project lifetime will be decided
within this task after completion of the system architecture (especially the
data models).
# Data security
Data security will be assured by:
* The project data repository (controlled access);
* The partners' secured access to their databases.
PANOPTIS data are not sensitive. The infrastructure data owners (ACCIONA and
Egnatia Odos) essentially want to control the use of their data and be sure
that they are not used in improper ways.
The HRAP module will handle a large set of rules and procedures that will
also be used for operational decision support.
# Ethical aspects
PANOPTIS data concern natural phenomena and road infrastructure. No part of
the PANOPTIS system manipulates personal data.
However, during tests, trials or dissemination events, pictures of persons
may be taken, either by the system sensors (fixed cameras, UAV cameras) or by
individual cameras, to illustrate reports or to put in the project galleries.
In addition, persons from inside or outside the consortium may be interviewed.
Whenever personal data (images, CVs, etc.) are collected, the persons
concerned will sign a consent form under which they accept the use of these
data in the context of the project, provided that the use does not go beyond
what is specified in the consent form.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
1335_STRATOFLY_769246.md
|
# 1\. Outline
This deliverable provides details of the “STRATOFLY Data Management Plan”. It
covers data management purposes and architecture, and the initial strategy
for data collection, storage, maintenance and publication. It also includes
information on how research data will be dealt with during the project: what
data will be collected, processed and/or generated; which strategy and
methodology will be used for data collection; and which data can be open
access or restricted to the project partners.
# 2\. Data Summary
## 2.1 Purpose of the Data collection
The data collected by STRATOFLY are technical data on a new high-speed civil
transportation concept, required to parametrise the multi-disciplinary
simulation model, as well as socio-economic data. The main purposes of the
data collection are explained through the three objectives given below:
**Main Objective I:** The STRATOFLY project will develop a new high-speed
civil transportation platform by taking advantage of scientific groups from
different disciplines working together. The multi-disciplinary subjects in
this project include air traffic management, design of the propulsion system,
flight mechanics/dynamics, thermal stress and load management, investigation
and modelling of structural/control dynamics and stresses, and studies on
climate and noise impacts. Data collection, sharing, storage and management
are needed for these subjects to use and benefit from each other's outputs.
**Main Objective II:** The development of the high-speed civil transportation
platform requires a propulsion plant that provides stable and sufficient
thrust along the flight trajectory. The design of the plant is conducted both
numerically and experimentally by different consortium partners. The design
performance depends largely on the interaction between these partners and on
how efficiently they can benefit from each other's data. Thus, data
management is essential for their communication and data usage.
**Main Objective III:** STRATOFLY aims to influence social, public and
environmental perceptions of the high-speed civil transportation platform in
a positive direction and to produce significant technical data to make this
idea real, by gathering innovative resources, feedback from the public and
stakeholders' opinions.
## 2.2 Types, Formats, and Utility of the collected Data
The types of data, data formats, accessibility and the list of partners
responsible for providing them are given in Table 1:
## Table 1 Data type, format, standards, accessibility, storage type and
partner
<table>
<tr>
<th>
Type of Data
</th>
<th>
WP
</th>
<th>
Standards
</th>
<th>
Accessibility
</th>
<th>
Curation/ preservation
</th>
<th>
Partner
</th> </tr>
<tr>
<td>
Progress, interim and final reports
</td>
<td>
\-
</td>
<td>
STRATOFLY report templates
</td>
<td>
Restricted to the project partners and the EC
</td>
<td>
Website’s intranet
(Alfresco)
Partners’ premises
</td>
<td>
All
</td> </tr>
<tr>
<td>
Raw model output data
</td>
<td>
\-
</td>
<td>
Model dependent
</td>
<td>
Restricted to the project partners and the EC
</td>
<td>
Website’s intranet
(Alfresco) Partners’ premises
</td>
<td>
All
</td> </tr>
<tr>
<td>
Meeting records
</td>
<td>
1
</td>
<td>
STRATOFLY MoM
template
</td>
<td>
Restricted to the project partners and the EC
</td>
<td>
Website’s intranet
(Alfresco) Partners’ premises
</td>
<td>
All
</td> </tr>
<tr>
<td>
1D reduced propulsion system
model
</td>
<td>
3.2
2.4
</td>
<td>
Model dependent
</td>
<td>
Restricted to the project partners
</td>
<td>
Partners’ premises
</td>
<td>
VKI, CIRA
</td> </tr>
<tr>
<td>
CFD database for the aero-propulsive system
</td>
<td>
3.3
3.4
</td>
<td>
Model dependent
</td>
<td>
Restricted to the project partners
</td>
<td>
Website’s intranet
(Alfresco) Partners’ premises
</td>
<td>
VKI, CIRA
</td> </tr>
<tr>
<td>
CFD database for the PAC parameters
</td>
<td>
3.4
</td>
<td>
Model dependent
</td>
<td>
Restricted to the project partners
</td>
<td>
Website’s intranet
(Alfresco) Partners’ premises
</td>
<td>
VKI, FOI
</td> </tr>
<tr>
<td>
CFD database for the combustor optimization
</td>
<td>
3.3
</td>
<td>
Model dependent
</td>
<td>
Restricted to the project partners
</td>
<td>
Website’s intranet
(Alfresco) Partners’ premises
</td>
<td>
CIRA, DLR-GO
</td> </tr>
<tr>
<td>
Thermal management database
</td>
<td>
2.2
3.5
</td>
<td>
Model dependent
</td>
<td>
Restricted to the project partners
</td>
<td>
Website’s intranet
(Alfresco) Partners’ premises
</td>
<td>
CIRA, POLITO
</td> </tr>
<tr>
<td>
Experimental database of noise investigation
</td>
<td>
4.2
</td>
<td>
Model dependent
</td>
<td>
Restricted to the project partners
</td>
<td>
Website’s intranet
(Alfresco) Partners’ premises
</td>
<td>
VKI, NLR
</td> </tr>
<tr>
<td>
Database for flight trajectory
</td>
<td>
3.5
4.4
</td>
<td>
Model dependent
</td>
<td>
Restricted to the project partners
</td>
<td>
Website’s intranet
(Alfresco) Partners’ premises
</td>
<td>
POLITO, CIRA, VKI, NLR, DLR, TUHH, CNRS
</td> </tr>
<tr>
<td>
Structural analysis (FEA) database
</td>
<td>
2.3
</td>
<td>
Model dependent
</td>
<td>
Restricted to the project partners
</td>
<td>
Partners’ premises
</td>
<td>
FICG
</td> </tr>
<tr>
<td>
Atmospheric composition model database
</td>
<td>
4.3
</td>
<td>
Model dependent
</td>
<td>
Restricted to the project partners
</td>
<td>
Partners’ premises
</td>
<td>
DLR-OP, CNRS, TUHH
</td> </tr>
<tr>
<td>
CAD data for aircraft airframe
</td>
<td>
2
</td>
<td>
CATIA
</td>
<td>
Restricted to the project partners
</td>
<td>
Partners’ premises
</td>
<td>
POLITO
</td> </tr>
<tr>
<td>
Experimental database for PAC
</td>
<td>
4.1
</td>
<td>
Model dependent
</td>
<td>
Restricted to the project partners
</td>
<td>
Partners’ premises
</td>
<td>
ONERA
</td> </tr>
<tr>
<td>
Experimental database for NOX measurement
</td>
<td>
4.1
</td>
<td>
Model dependent
</td>
<td>
Restricted to the project partners
</td>
<td>
Partners’ premises
</td>
<td>
DLR-GO
</td> </tr> </table>
### 2.3 STRATOFLY data management methodology
A guide to data management in the STRATOFLY project is given in Figure 1. The
management consists of three aspects:
1. Architecture: the structure and function of the data sharing and storage system.
2. Process: the process, shown in Figure 2, for checking whether data contributed to the STRATOFLY project are usable and accessible.
3. Reporting: monitoring of the data archives can be used to inform the decision-making mechanism for future improvements in data management planning.
As shown in Figure 1, online data archiving and sharing are provided via
scientific outputs in journals and conference proceedings. Open public data
are accessible through the project website, while data restricted to the
project partners are available on the project's collaborative site, Alfresco.
The partners can also store data in their local repositories, such as cluster
storage.
The data processing scheme covers the processing steps from the raw state to
the end. Data considered for use in project steps arrive in a raw state and
are stored in local repositories. Through their analysis of the data, the
partners determine whether the data are important. If so, the data can be
shared on the collaborative website Alfresco, monitored in the project
database and discussed during project meetings. When the data are deemed
suitable for publication, they are presented and stored in an online journal
repository and on Alfresco.
# 3\. FAIR Data
STRATOFLY uses the Alfresco repository to store all data, which are
accessible to all project partners. The Alfresco platform is a modern,
enterprise-class, cloud-native platform that provides a quick way for project
partners to interact with information and to share documents with each other
in a storage pool.
STRATOFLY also publicly shares the data generated by the consortium. This
action aims at facilitating access to and exploitation of scientific data by
all levels of society. In parallel with the scientific publications
(necessary for the impact factor of the STRATOFLY publications), the
consortium endeavors to submit a portion of its results to the European Open
Science Cloud (EOSC) through its network of repositories. A majority of the
partners of this consortium are already connected to the EOSC through the
OpenAIRE network and will use their institutional repositories to that end.
In parallel, the project will use the CERN-based ZENODO open-access
repository to upload i) all the public deliverables generated by the
consortium, ii) open publications, and iii) selected subsets of the
experimental and numerical databases.
## 3.1 Making data findable, including provisions for metadata
All outcomes (summaries of meetings and presentations from progress meetings)
are uploaded to the Alfresco server within 15 days of the project meetings.
The data are assigned descriptive names, keywords and metadata so that they
can be accessed easily.
**Naming conventions:**
Data are named using the following conventions according to their type (see
the sketch below):
Deliverables: D.x.y.z
* x: number of the work package
* y: number of the sub-work package
* z: number of the deliverable
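As a minimal sketch, the convention above can be checked programmatically; the regular expression below simply encodes the D.x.y.z pattern and is an illustration, not part of the STRATOFLY tooling.

```python
# Minimal sketch: validate and decompose deliverable identifiers that follow
# the D.x.y.z convention described above (illustrative, not project tooling).
import re

PATTERN = re.compile(r"^D\.?(\d+)\.(\d+)\.(\d+)$")  # accepts "D1.2.3" and "D.1.2.3"

def parse_deliverable(name):
    m = PATTERN.match(name)
    if not m:
        raise ValueError(f"{name!r} does not follow the D.x.y.z convention")
    wp, sub_wp, deliverable = (int(g) for g in m.groups())
    return {"work_package": wp, "sub_work_package": sub_wp, "deliverable": deliverable}

print(parse_deliverable("D.3.2.1"))
# -> {'work_package': 3, 'sub_work_package': 2, 'deliverable': 1}
```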
**Metadata:**
The metadata associated with a published dataset must include, by default
(see the illustrative record below):
✔ Digital Object Identifiers and version numbers
✔ Bibliographic information
✔ Keywords
✔ Abstract/information
✔ Associated project and communities
✔ Associated publications and reports
✔ Grant information
✔ Access and licensing information
✔ Language
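For illustration, a record satisfying this checklist could take the following shape. The field names loosely follow common repository conventions (e.g. Zenodo's deposit metadata), but the exact schema depends on the repository used, and all values are placeholders.

```python
# Illustrative metadata record covering the checklist above. Field names
# loosely follow common repository conventions (e.g. Zenodo); the exact
# schema depends on the repository, and all values are placeholders.
metadata = {
    "doi": "10.5281/zenodo.0000000",        # DOI (placeholder)
    "version": "1.0",                        # version number
    "title": "Example STRATOFLY dataset",    # bibliographic information
    "creators": [{"name": "Doe, Jane", "affiliation": "VKI"}],
    "keywords": ["hypersonic", "propulsion", "STRATOFLY"],
    "description": "Abstract describing the dataset.",
    "communities": [{"identifier": "stratofly"}],      # associated community
    "related_identifiers": [                           # associated publications
        {"relation": "isSupplementTo", "identifier": "10.1000/example"}
    ],
    "grants": [{"id": "769246"}],            # grant information (project number)
    "license": "CC-BY-4.0",                  # access and licensing information
    "language": "eng",
}
```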
## 3.2 Making data openly accessible
All scientific outputs are by default made publicly accessible in accordance
with the Green/Gold access regulations applied by each scientific publisher.
As far as possible, the technical papers will be freely accessible through
the project website. Additionally, data can be accessed via journal and
conference repositories. The project consortium also endeavors to provide a
significant amount of the results to the European Open Science Cloud (EOSC)
via its network of repositories.
## 3.3 Making data interoperable
Scientific notation such as S.I. units and vocabulary as defined in ISO test
standards are used in the data files. Alfresco and Zenodo, where the data and
metadata are shared and stored, use popular formats such as BibTeX, CSL,
DataCite and ASCII, and support export to Mendeley.
## 3.4 Increase data re-use
The data published in scientific journals are made available once Open Access
is granted for the paper or preprint. Open data will be reusable as defined
by the licence conditions. However, data restricted to the project partners
and the European Commission are defined as confidential and will not be
reusable by default, to avoid commercial exploitation.
# 4\. Allocation of Resources
The resources allocated for data management involve project-management
man-hours of the deputy coordinator. The von Karman Institute will be
responsible for the management of data generated by the STRATOFLY consortium,
in coordination with Politecnico di Torino. Dr. Riamando Giamanco (VKI) will
be involved in the allocation of information technology resources for the
external services (STRATOFLY website domain provider). The file exchange
server (Alfresco) for the consortium will be provided, managed and maintained
by the deputy coordinator. The data storage of the Alfresco server will be
provided by the IT services of the von Karman Institute.
The von Karman Institute will keep the project-related databases up to date
(i.e. Zenodo, the monitoring database on the STRATOFLY Alfresco server and
the continuous reporting feature in the EC's participant portal), in
coordination with Politecnico di Torino.
**Gold data stored in the Zenodo repository:** There will be no long-term
archiving costs for the project because Zenodo is an already-financed
repository that is freely usable externally. The storage time for data
curation is limited only by the lifetime of the Zenodo repository.
**Self-archiving or so-called 'green' Open Access:** It will also be applied
through the Zenodo repository. As required, open access to the publications
will be ensured within a maximum delay of 6 months. The difference between
gold and green open access (including related fees) can be found in the table
below.
<table>
<tr>
<th>
</th>
<th>
Gold open access
</th>
<th>
Green open access
</th> </tr>
<tr>
<td>
Definition
</td>
<td>
Open access publishing (also called 'Gold' open access) means that an article
is immediately provided in open access mode by the scientific publisher. The
associated costs are shifted away from readers, and instead to (for example)
the university or research institute to which the researcher is affiliated, or
to the funding agency supporting the research.
</td>
<td>
Self-archiving (also called 'Green' open access) means that the published
article or the final peer-reviewed manuscript is archived by the researcher,
or a representative, in an online repository before, after or alongside its
publication. Access to the article is often, but not necessarily, delayed
('embargo period'), as some scientific publishers may wish to recoup their
investment by selling subscriptions and charging pay-per-download/view fees
during an exclusivity period.
</td> </tr>
<tr>
<td>
Options
</td>
<td>
* Publish in an open access journal
* Or in a journal which supports open access
</td>
<td>
* Link to the article
* Select a journal that features an open archive
* Self-archive a version of the
article
</td> </tr>
<tr>
<td>
Access
</td>
<td>
* Public access is to the final published article
* Free access to a version of the article
</td>
<td>
* Access is immediate
* Time delay may apply (embargo period)
</td> </tr>
<tr>
<td>
Fees
</td>
<td>
* Open access fee is paid by the author
* Fees range between $500 and $5,000 USD depending on the journal
</td>
<td>
✔ No fee is payable by the author as publishing costs are covered by library
subscriptions
</td> </tr>
<tr>
<td>
Use
</td>
<td>
✔ Authors can choose between a commercial & noncommercial
user license
</td>
<td>
✔ Accepted manuscripts should attach a Creative Common
License ✔ Authors retain the right to reuse their articles for a wide range of
purposes
</td> </tr> </table>
# 5\. Data Security
Data security is as specified by the Zenodo repository:
1. **Versions:** Data files are versioned, while meeting records are not. The uploaded data are archived as a Submission Information Package.
Derivatives of data files are generated, but original content is never
modified. Records can be retracted from public view; however, the data files
and record are preserved.
2. **Replicas:** All data files are stored in CERN Data Centres, primarily Geneva, with replicas in Budapest. Data files are kept in multiple replicas in a distributed file system, which is backed up to tape on a nightly basis.
3. **Retention period:** Items will be retained for the lifetime of the repository. This is currently the lifetime of the host laboratory CERN, which currently has an experimental program defined for the next 20 years at least.
4. **Functional preservation** : Zenodo makes no promises of usability and understandability of deposited objects over time.
5. **File preservation:** Data files and metadata are backed up nightly and replicated into multiple copies in the online system.
6. **Fixity and authenticity:** All data files are stored along with an MD5 checksum of the file content. Files are regularly checked against their checksums to ensure that the file content remains constant (see the sketch after this list).
7. **Succession plans:** In case of closure of the repository, best efforts will be made to integrate all content into suitable alternative institutional and/or subject based repositories.
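The fixity check described in point 6 can be reproduced locally; below is a minimal sketch, assuming the recorded checksums live in a simple manifest dictionary (file names and digests are placeholders).

```python
# Minimal sketch of the fixity check in point 6: compute a file's MD5
# checksum and compare it against a previously recorded value.
import hashlib

def md5_of(path, chunk_size=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical manifest of recorded checksums (file name -> MD5 hex digest)
manifest = {"dataset_v1.csv": "9e107d9d372bb6826bd81d3542a419d6"}

for name, expected in manifest.items():
    status = "OK" if md5_of(name) == expected else "CORRUPTED"
    print(name, status)
```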
# 6\. Ethical Aspects
No sensitive personal data will be collected (see Grant Agreement, Annex 1
(part B)).
# 7\. Other Procedures
No other national, funder, sectorial, or departmental procedures for data
management are planned.
## 8\. Conclusions
This document describes the nature of the data management (the way data are
collected, preserved and shared) and explains the principles of the data
management procedures for the STRATOFLY project in detail. The classification
of the data related to the STRATOFLY project is also outlined, and the
preservation and protection of the proprietary data held by the STRATOFLY
consortium is detailed. Moreover, the determination and sharing of
open-access data by the STRATOFLY consortium will be made through various
platforms. Gold open-access data will be disseminated through
high-impact-factor journals, while an extract of green data will be generated
from the experimental and numerical data obtained during the execution of the
project by the coordinator and the partner who is the source of the
information. One of the relevant platforms designated by the European
Commission for open-access data sharing is the Zenodo platform. The basic
principles of the Zenodo platform and the FAIR data policy are also provided
in this deliverable. The allocation of resources for STRATOFLY data
management during the execution of the project and beyond is also summarised.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
1336_SAFEWAY_769255.md
|
# Executive Summary
This document describes the second version of the Data Management Plan (DMP)
for the SAFEWAY project. The DMP provides an analysis of the main elements of
the data management policy that will be used throughout the SAFEWAY project by
the project partners, with regard to all datasets that will be generated by
the project. The documentation of this plan is a precursor to the WP1
Management. The format of the plan follows the Horizon 2020 template
“Guidelines on Data Management in Horizon 2020” 1 .
## **2\. General Principles**
**2.1 Pilot on Open Research Data**
The SAFEWAY Project is fully aware of the article on open access to
scientific publications (Article 29.2 of the H2020 Grant Agreement), as well
as the article on open access to research data (Article 29.3 of the H2020
Grant Agreement). However, the project partners have opted out of Open
Research Data due to a possible conflict with protecting results: SAFEWAY
results will be close to market, and disclosure of results should be handled
with care, always considering exploitation/commercialization possibilities.
**2.2 IPR management and security**
The SAFEWAY project strategy for knowledge management and protection
considers a complete range of elements leading to optimal visibility of the
project and its results, increasing the likelihood of market uptake of the
provided solution and ensuring smooth handling of the individual intellectual
property rights of the partners involved, in view of paving the way to
knowledge transfer:
IPR protection and IPR strategy activities will be managed by Laura TORDERA
from FERROVIAL (leader of WP10) as Innovation and Exploitation Manager with
the support of the H2020 IPR Helpdesk. The overall IPR strategy of the project
is to ensure that partners are free to benefit from their complementarities
and to fully exploit their market position. Hence, the project has a policy of
patenting where possible. An IPR Plan will be included in the Exploitation &
Business Plans (D10.4).
Regarding Background IP (tangible and intangible input held by each partner
prior to the project that is needed for the execution of the project and/or
for exploiting the results), it will be detailed in the Consortium Agreement,
which defines any royalty payments necessary for access to this IP. Regarding
Foreground IP (results generated under the project), it will belong to the
partner who generated it. Each partner will take appropriate measures to
properly manage ownership issues. When several beneficiaries have jointly
generated results and their respective shares of the work cannot be
ascertained, they will have joint ownership of such results. They will
establish an agreement regarding the terms of exercising the joint ownership,
including the definition of the conditions for granting licences to third
parties.
**2.3 Allocation of resources**
The Project Technical Committee (PTC) will be responsible of collecting the
knowledge generated and defining protection strategy and the necessary access
rights for results exploitation, as well as propose fair solutions to any
possible conflict related to IPR. Complementarily, the PTC through the
Exploitation & Innovation Manager (E&IM) will keep a permanent surveillance
activity on the blocking IP or new IP generated elsewhere in the EU landscape
to ensure SAFEWAY freedom to operate. The output of this activity will be
included in the Exploitation and Business Plan (E&BP), which will be updated
during the project time frame.
**2.4 Personal data protection**
For some of the activities to be carried out by the project, it may be
necessary to collect basic personal data (e.g. full name, contact details,
background), even though the project will avoid collecting such data unless
deemed necessary.
Such data will be protected in compliance with the EU's General Data
Protection Regulation, Regulation (EU) 2016/679. National legislations
applicable to the project will also be strictly followed.
All data collected by the project will be done after giving data subjects full
details on the experiments to be conducted, and after obtaining signed
informed consent forms. Such forms, provided in the previous deliverable D11.2
POPD – Requirement No 2, are also included in Appendix 1 of this document.
Additionally, the overall information about procedures for data collection,
processing, storage, retention and destruction were also provided in D11.2,
which are annexed to the present DMP in Appendix 2.
**2.5 Data security**
SAFEWAY shall take the following technical and organizational security
measures to protect personal data:
1. Organizational management and dedicated staff responsible for the development, implementation, and maintenance of SAFEWAY’s information security program.
2. Audit and risk assessment procedures for the purposes of periodic review, monitoring and maintaining compliance with SAFEWAY policies and procedures, and reporting the condition of its information security and compliance to senior internal management.
3. Maintenance of information security policies, ensuring that policies and measures are regularly reviewed and, where necessary, improved.
4. Password controls designed to manage and control password strength and usage, including prohibiting users from sharing passwords.
5. Security and communication protocols, supporting Big Data analytics, will be developed as required. SAFEWAY solutions will anticipate security not only technically but also with regard to Regulation (EU) 2016/679 and the changes in the data protection regime as of May 2018. Other institutional, national or international security measures, guidelines or regulations – such as ISO 27002 and ISO 15713 – are also recommended for consideration.
6. SAFEWAY solutions will not centralise all native data in a common database; instead, they will retrieve the data needed for the platform functionalities on demand. The services layer of the platform includes the communication applications governing information disclosure.
7. Operational procedures and controls to provide for configuration, monitoring, and maintenance of technology and information systems according to prescribed internal and adopted industry standards, including secure disposal of systems and media to render all information or data contained therein as undecipherable or unrecoverable prior to final disposal.
8. Change management procedures and tracking mechanisms designed to test, approve and monitor all changes to SAFEWAY technology and information assets.
9. Incident management procedures designed to investigate, respond to, mitigate and notify of events related to SAFEWAY technology and information assets, data protection incidents and procedure of notification to the authorities included.
10. Vulnerability assessment, patch management and threat-protection technologies, with scheduled monitoring procedures designed to identify, assess, mitigate and protect against identified security threats, viruses and other malicious code.
11. Data will, wherever possible, be processed in anonymised or pseudonymised form (a minimal pseudonymisation sketch is given at the end of this section).
12. Data will be processed ONLY if it is adequate, relevant and limited to what is necessary for the research (‘data minimisation principle’):
    1. Personal data will be adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed.
    2. The minimum amount of personal data necessary to fulfil the purpose of SAFEWAY will be identified.
    3. No more personal data than necessary for the purpose of SAFEWAY will be obtained and stored.
    4. Whenever it is necessary to process certain particular information about certain individuals, it will be collected only for those individuals.
    5. Personal data will not be collected merely because it could be useful in the future.
These guidelines will be of special application for INNOVACTORY and TØI, the
two project partners with the most intensive role in the use of personal data.
The exact treatment of the data by these two entities is annexed in
Deliverable D11.1 – Ethics Requirements.
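For illustration only, the following minimal Python sketch shows one way the pseudonymisation principle of point 11 could be realised: direct identifiers are replaced with salted SHA-256 hashes, so records remain linkable without exposing identities. The salt value and the example record are assumptions for illustration, not part of the SAFEWAY specification.

```python
import hashlib

# Project-wide secret salt; in practice this would be stored securely
# (e.g. in a key vault), never alongside the pseudonymised data.
SALT = b"safeway-secret-salt"  # placeholder value for illustration


def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g. a full name) with a salted hash.

    The same input always maps to the same token, so records stay
    linkable across datasets without revealing the identity itself.
    """
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()


record = {"name": "Jane Doe", "answer": "tolerates moderate delay risk"}
record["name"] = pseudonymise(record["name"])
print(record)  # the name field is now an opaque token
```

Re-identification then requires access to both the salt and the original identifiers, which aligns with the access-restriction measures listed above.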
**2.6 Ethical aspects**
An ethical approach will be adopted and maintained throughout the fieldwork
process. The Ethics Mentor (appointed in D11.3 GEN Requirement 3) will ensure
that the EU standards regarding ethics and data management are fulfilled. Each
partner will proceed with the survey according to the provisions of national
legislation, as aligned with the respective EU Directives on data management
and ethics.
The consortium will ensure the participants’ right to privacy and the
confidentiality of data in the surveys by providing survey participants with
the Informed Consent Procedures:
\- for those participating in the surveys carried out within Task 4.3 by the
Institute of Transport Economics – Norwegian Centre for Transport Research.
These documents will be sent electronically and will provide information about
how the answers will be used and what the purpose of the survey is.
Participants will be assured that their answers, and any personal data, will
be used only for the purposes of the specific survey. The voluntary character
of participation will be stated explicitly in the Consent Form.
As it is established in Deliverable D11.3, an Ethics Mentor is appointed to
advise the project participants on ethics issues relevant to protection of
personal data.
The Ethics Mentor will advise and supervise the following aspects of the
Project:
* _Data protection by design and default_ . The Project will require data controllers to implement appropriate technical and organisational measures to give effect to the GDPR’s core data-protection principles.
* _Informed consent to data processing_ . Whenever any personal data is collected directly from research participants, their informed consent will be sought by means of a procedure that meets the standards of the GDPR.
* _Use of previously collected data (‘secondary use’)_ . If personal data is processed in the Project without the express consent of the data subjects, it will be explained how those data are obtained, and their use in the Project will be justified.
* _Data protection processors._ If any Data Processor works with information belonging to the data controller, a Data Processor agreement shall be formalised.
* _Data protection impact assessments (DPIA)_ . If the Project involves operations likely to result in a high risk to the rights and freedoms of natural persons, a DPIA will be conducted.
* _Profiling, tracking, surveillance, automated decision-making and big data_ . If the Project involves these techniques, a detailed analysis will be provided of the ethics issues raised by this methodology. It will comprise an overview of all planned data collection and processing operations; identification and analysis of the ethics issues that these raise, and an explanation of how these issues will be addressed to mitigate them in practice.
* _Data security_ . Both ethical and legal measures will be taken to ensure that participants’ information is properly protected. These may include the pseudonymisation and encryption of personal data, as well as policies and procedures to ensure the confidentiality, integrity, availability and resilience of processing systems.
* _Deletion and archiving of data_ . Finally, the collected personal data will be kept only as long as necessary for the purposes for which they were collected, or in accordance with the established auditing, archiving or retention provisions for the Project. These provisions must be explained to the research participants in accordance with the informed consent procedures.
* _Best practices Code._ It is recommended to implement a code in which data protection best practices are set out and fully explained. This document may be tailored to the type of employee or partner working with the information, and may also specify whether employees, freelancers or partners may use personal devices when working with it (Bring Your Own Device policies).
## **3\. Data Set Description**
SAFEWAY is committed to adopting, whenever possible, the FAIR principles for
research data; that is, data should be findable, accessible, interoperable and
re-usable.
SAFEWAY partners have identified the datasets that will be produced during the
different phases of the project. The list is provided below, while the nature
and details of each dataset are given in Section 4.
This list is indicative, allowing an estimation of the data that SAFEWAY
will produce; it may be adapted (addition/removal of datasets) in the next
versions of the DMP to take the project developments into consideration.
**Table 2:** SAFEWAY Dataset overview
<table>
<tr>
<th>
**No**
</th>
<th>
**Dataset name**
</th>
<th>
**Responsible partner**
</th>
<th>
**Related Task**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Mobile Mapping System (MMS) data
</td>
<td>
UVIGO
</td>
<td>
T3.2
</td> </tr>
<tr>
<td>
2
</td>
<td>
Historic weather dataset
</td>
<td>
UVIGO
</td>
<td>
T3.1 & T3.3
</td> </tr>
<tr>
<td>
3
</td>
<td>
Global Forecast System (GFS) data
</td>
<td>
UVIGO
</td>
<td>
T3.1 & T3.3
</td> </tr>
<tr>
<td>
4
</td>
<td>
Satellite data
</td>
<td>
PNK
</td>
<td>
T3.2
</td> </tr>
<tr>
<td>
5
</td>
<td>
Experts interviews
</td>
<td>
TØI
</td>
<td>
T4.3
</td> </tr>
<tr>
<td>
6
</td>
<td>
Data on risk tolerance
</td>
<td>
TØI
</td>
<td>
T4.3
</td> </tr>
<tr>
<td>
7
</td>
<td>
Sociotechnical system analysis
</td>
<td>
TØI
</td>
<td>
T4.3
</td> </tr>
<tr>
<td>
8
</td>
<td>
Infrastructure assets data
</td>
<td>
UMINHO
</td>
<td>
T5.1
</td> </tr>
<tr>
<td>
9
</td>
<td>
Information on the value system
</td>
<td>
IMC
</td>
<td>
T6.1
</td> </tr>
<tr>
<td>
10
</td>
<td>
Stakeholder contacts collection
</td>
<td>
UVIGO
</td>
<td>
WP10
</td> </tr>
<tr>
<td>
11
</td>
<td>
Dissemination events data
</td>
<td>
UVIGO
</td>
<td>
T10.3
</td> </tr>
<tr>
<td>
12
</td>
<td>
Stakeholder feedback
</td>
<td>
UVIGO
</td>
<td>
WP10
</td> </tr>
<tr>
<td>
13
</td>
<td>
Crowd-sourced data
</td>
<td>
INNOVACTORY
</td>
<td>
T4.1 & T5.1
</td> </tr> </table>
**Table 3:** Datasets description and purpose
<table>
<tr>
<th>
**No**
</th>
<th>
**Dataset name**
</th>
<th>
**Description**
</th>
<th>
**Purpose**
</th>
<th>
**Legitimation**
</th> </tr>
<tr>
<td>
1
</td>
<td>
MMS data
</td>
<td>
Data from the different sensors equipped in the Mobile Mapping System (MMS)
employed for the monitoring of the infrastructures, including data from some
or all the following sources: LiDAR sensors, RGB cameras, thermographic
cameras, and Ground Penetrating Radar.
</td>
<td>
Inspection of the infrastructure critical assets to quantify condition. From
this data, the input information for predictive models (WP5) and
SAFEWAY IMS
(WP7) will be extracted.
</td>
<td>
No personal data is collected; no informed consent or policy acceptance is
needed in this regard.
</td> </tr>
<tr>
<td>
2
</td>
<td>
Historic weather dataset
</td>
<td>
Observational quantitative meteorological data measured at hourly (or shorter)
intervals across the Instituto Português do Mar e da Atmosfera (IPMA) weather
station network. Relevant variables are air temperature, atmospheric pressure,
wind speed and direction, maximum wind gust speed and direction, relative air
humidity, instant rain and solar radiation.
</td>
<td>
Main source of observational information for meteorological data interpolation
and short-term prediction systems. Base dataset for the meteorological
activities in WP3.
</td>
<td>
No personal data is collected; no informed consent or policy acceptance is
needed in this regard.
</td> </tr>
<tr>
<td>
3
</td>
<td>
Global
Forecast System (GFS) data
</td>
<td>
Predictive quantitative meteorological data calculated at hourly temporal
frequency on a planet-wide grid with ~11 km horizontal spatial resolution by
the National Oceanic and Atmospheric Administration Global Forecast System
(GFS) numerical model. Relevant variables are those most analogous to the
Historic weather dataset ones.
</td>
<td>
Complementary source of observational information for meteorological data
interpolation and short-term prediction systems, used in the same way as the
Historic weather dataset.
</td>
<td>
No personal data is collected; no informed consent or policy acceptance is
needed in this regard.
</td> </tr>
<tr>
<td>
4
</td>
<td>
Satellite data
</td>
<td>
Sentinel-1 satellite imagery from the Copernicus Open Access Hub, used to
optimise the Rheticus® displacement service based on MTInSAR algorithms.
</td>
<td>
Geospatial information acquired from satellites is key to detecting and
quantifying terrain displacement and deformation (e.g. landslides, subsidence).
</td>
<td>
No personal data is collected; no informed consent or policy acceptance is
needed in this regard.
</td> </tr>
<tr>
<td>
5
</td>
<td>
Experts interviews
</td>
<td>
The data contain transcriptions and notes from expert interviews with
researchers and policy makers. Interviews will be conducted in person, by
phone (or Skype), or in written form. The dataset also includes findings from
completed/ongoing EU projects.
</td>
<td>
The aim is to identify and collect sources of knowledge on how the different
users think/act in extreme situations and on their level of preparedness and
risk tolerance, and to identify case studies for the analysis of risk tolerance.
</td>
<td>
No personal data is collected; no informed consent or policy acceptance is
needed in this regard.
</td> </tr>
<tr>
<td>
6
</td>
<td>
Data on risk tolerance
</td>
<td>
This includes the evaluation of the risk tolerance of different actors,
scheduling for use in focus groups, and follow-up surveys with
different user representatives.
</td>
<td>
To establish findings on varying levels of risk tolerance and preparedness for
a range of short- and long-term extreme events among the user groups.
</td>
<td>
No personal data is collected; no informed consent or policy acceptance is
needed in this regard.
</td> </tr>
<tr>
<td>
7
</td>
<td>
Sociotechnical system analysis
</td>
<td>
Selected cases will be documented to represent a range of event types
occurring in Europe. Interviews and template analysis will be conducted with
people both managing and caught up in the extreme events studied.
</td>
<td>
These analyses, along with established sociotechnical system principles, will
inform the optimal social and technical arrangements for the IMS.
</td>
<td>
No personal data is collected; no informed consent or policy acceptance is
needed in this regard.
</td> </tr>
<tr>
<td>
8
</td>
<td>
Infrastructure assets data
</td>
<td>
Database of infrastructures with identification, conservation state,
inspections and structural
detailing
</td>
<td>
Database needed to define the input data for the development of predictive
models.
</td>
<td>
No personal data is collected; no informed consent or policy acceptance is
needed in this regard.
</td> </tr>
<tr>
<td>
9
</td>
<td>
Information on the value system
</td>
<td>
The information on the value systems, decision making processes and key
performance indicators that transportation infrastructure agencies and
stakeholders within the project use in management of their assets.
</td>
<td>
The monetised direct and indirect consequences of inadequate infrastructure
performance are needed as input to develop the value system that will allow
prioritising the interventions of stakeholders related to transport
infrastructure.
</td>
<td>
Data is collected with informed consent, following the model in Appendix 1,
through which the project obtains the legitimation to process these data.
</td> </tr>
<tr>
<td>
10
</td>
<td>
Stakeholder contacts collection
</td>
<td>
The data contain information on the main stakeholders of SAFEWAY along the
major stakeholder groups. They include infrastructure managers, operators,
public administrations, researchers, practitioners, policy makers. The contact
information that is collected includes the name, institutional affiliation,
position, email address, phone number and office address.
</td>
<td>
The collection will be used for contacting the respondents for the validation
of the project outcomes. It also provides the basis for the dissemination of
the project and for promoting the SAFEWAY IT solutions.
</td>
<td>
Data is collected by the informed consent – following the model from appendix
1 where the project obtains the legitimation to process this data
</td> </tr>
<tr>
<td>
11
</td>
<td>
Workshops data
</td>
<td>
The data contain protocols, written notes and summaries produced at the three
workshops, which are organized in different countries. The workshops are aimed
at developers and providers of technical solutions.
This dataset also includes the contact information of attendees: name,
institutional affiliation, position, email address, phone number and office
address.
</td>
<td>
The information gathered at the workshops will support the development of the
SAFEWAY methodologies and tools.
</td>
<td>
Data is collected by the informed consent – following the model from appendix
1 where the project obtains the legitimation to process this data
</td> </tr>
<tr>
<td>
12
</td>
<td>
Stakeholders feedback
</td>
<td>
Dataset containing responses from key stakeholders and Advisory Board members
to different technical feedback surveys that will be produced during the
project to gather feedback about the technical implementation of the project.
</td>
<td>
The information gathered through the surveys will support the development of
the SAFEWAY methodologies and tools.
It will also contribute to quantifying SAFEWAY’s impact.
</td>
<td>
Data is collected by the informed consent – following the model from appendix
1 where the project obtains the legitimation to process this data
</td> </tr>
<tr>
<td>
13
</td>
<td>
Crowdsourced data
</td>
<td>
Flow and incident data from TomTom. TomTom will collect these data by merging
multiple data sources, including anonymized measurement data from over
550 million GPS-enabled devices.
</td>
<td>
Database needed to define the input data for the development of predictive
models.
</td>
<td>
No personal data is received from TomTom; no informed consent or policy
acceptance is needed in this regard.
</td> </tr> </table>
## **4\. SAFEWAY Datasets**
**4.1 Dataset No 1: MMS data**
<table>
<tr>
<th>
**Mobile Mapping System (MMS) data**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset comprises all the data collected by the mapping technologies
proposed by UVIGO in WP3. Therefore, it contains data from the different
sensors equipped in the Mobile Mapping System (MMS) employed for the
monitoring of the infrastructures, including data from some or all the
following sources: LiDAR sensors,
RGB cameras, thermographic cameras, and Ground Penetrating Radar. Data from
different LiDAR sensors (Terrestrial or Aerial) that may be employed for the
fulfilment of the different monitoring tasks will be comprised in this dataset
as well.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Sensor data gathered from the Mobile
Mapping System (MMS) owned by UVIGO.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
UVIGO; N/A
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP3:
-Task 3.1 (Data acquisition).
-Task 3.2 (Data pre-processing).
-Task 3.3 (Data processing and automation of monitoring)
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
Point cloud data from LiDAR sensors will be produced in real time while the
monitoring of the infrastructures is carried out. The metadata of that
information, stored in ‘.las’ format, is documented at
_http://www.asprs.org/wp-content/uploads/2019/03/LAS_1_4_r14.pdf_ . Imagery
will be produced together with the point cloud data, and its metadata will
follow the specifications of the corresponding image file format.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Data recorded from the different sensors of the MMS dataset will be stored in
standard formats:
* Point cloud data obtained from the LiDAR sensors will be stored either in standard binarized format (.las) or (less likely) as plain text (.txt).
* Imagery will be stored in standard image file formats (.jpg, .tiff…)
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The recorded data will be used for the monitoring of the infrastructures
within the case studies of the project. The raw data acquired by the set of
sensors equipped in the monitoring system will be processed to extract
meaningful information about the infrastructure that can feed different
attributes of the Infrastructure Information Model that is being developed in
Task 3.3, and also for three-dimensional visualization of the monitored
infrastructure.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Only the partner in charge of the data collection will have access to the raw
data of the dataset. The results of the data processing tasks (mainly
attribute fields required by the Infrastructure Information Model) will be
shared with other members as they will be integrated into the SAFEWAY
database. Any relevant three-dimensional visualization of the data could be
made public for presenting final results.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Data sharing and re-use at the end of the project will be subject to the
permission of the infrastructure owners. Nevertheless, data will be available
for research purposes (development of future data processing algorithms)
provided that datasets are fully anonymised in such a way that they cannot be
associated with real structures.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
Data collected in this dataset will not intentionally include any personal
data. In the event that an identifiable individual appears within the imagery
part of the dataset, these data will be pre-processed to ensure that they are
anonymised or pseudonymised. In particular, the imagery is pre-processed
before being made available to the project: an algorithm detects and blurs
faces and car plates in the images before they are made available for further
analysis (a minimal sketch of such blurring follows this table).
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data will be stored on secured servers of the partner in charge of the
dataset, where only research members will be granted access to the information
within the dataset.
The Consortium will take into account that, for the purposes of the SAFEWAY
project, the retention period is the one used in the relevant field; by
analogy to administrative and financial issues, this will be 5 years.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
Data destruction will be carried out following the partner’s guidelines on
data destruction, which will always comply with EU and national legislation
and with international standards such as ISO 27002:2017. In this particular
case, the Spanish national public administration’s guidelines on electronic
data destruction will be considered.
</td> </tr> </table>
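For illustration of the blurring pre-processing described above, the sketch below uses OpenCV’s bundled Haar cascade for frontal faces; it is a minimal approximation, not the exact UVIGO pipeline, and the file names are hypothetical. An analogous OpenCV cascade exists for licence plates and would be applied in the same way.

```python
import cv2


def blur_faces(image_path: str, output_path: str) -> None:
    """Detect frontal faces and blur them before the image is shared."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                 minNeighbors=5):
        roi = image[y:y + h, x:x + w]
        # A heavy Gaussian blur makes the detected region unrecognisable.
        image[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    cv2.imwrite(output_path, image)


blur_faces("frame_000123.jpg", "frame_000123_anon.jpg")  # hypothetical names
```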
**4.2 Dataset No 2: Historic weather dataset**
<table>
<tr>
<th>
**Historic weather dataset**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
IPMA’s Portugal Weather Dataset.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Instituto Português do Mar e da Atmosfera.
Web: _http://www.ipma.pt/pt/index.html_
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
IPMA.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
IP.
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
UVIGO.
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
UVIGO.
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP3, tasks 3.1, 3.3.
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
Observational weather data are continuously generated by the automated
meteorological stations belonging to IPMA’s network at a 1-hour (or
10-minute) frequency. IPMA will provide a subset of these data, limited to the
requested variables, for the considered stations and timespan.
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
JSON, XML or SQL formats for storing meteorological data: hourly numeric
values for each of the 9 required meteorological variables (air temperature,
atmospheric pressure, wind speed and direction, maximum wind gust speed and
direction, relative air humidity, instant rain and solar radiation), for each
of the provided observation weather stations (between 30 and 100), over the
Portuguese meteorological case study time span (a minimal storage sketch
follows this table).
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Input for interpolation and short-term prediction algorithms used in WP3.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Confidential.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Collected data will potentially be used in future scientific research papers.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No personal data.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data will be stored in UVIGO computing facilities for the duration of the
SAFEWAY project.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
Data will be stored indefinitely, with no planned destruction.
</td> </tr> </table>
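As one possible realisation of the SQL storage option mentioned in the table above, the following sketch defines a single SQLite table holding hourly station observations for the nine variables. The table name, column names, units and the sample station identifier are assumptions for illustration, not the actual schema.

```python
import sqlite3

conn = sqlite3.connect("ipma_observations.db")  # hypothetical file name
conn.execute("""
    CREATE TABLE IF NOT EXISTS observation (
        station_id      TEXT NOT NULL,   -- IPMA station identifier
        observed_at     TEXT NOT NULL,   -- ISO-8601 timestamp, hourly
        air_temperature REAL,            -- assumed degrees Celsius
        pressure        REAL,            -- assumed hPa
        wind_speed      REAL,            -- assumed m/s
        wind_direction  REAL,            -- assumed degrees
        gust_speed      REAL,            -- assumed m/s
        gust_direction  REAL,            -- assumed degrees
        humidity        REAL,            -- assumed %
        instant_rain    REAL,            -- assumed mm
        solar_radiation REAL,            -- assumed W/m^2
        PRIMARY KEY (station_id, observed_at)
    )
""")
conn.execute(
    "INSERT OR REPLACE INTO observation VALUES (?,?,?,?,?,?,?,?,?,?,?)",
    ("STATION_001", "2019-08-01T12:00:00", 28.4, 1013.2,
     3.1, 270.0, 6.8, 265.0, 55.0, 0.0, 812.0),  # illustrative values
)
conn.commit()
```

The composite primary key (station, timestamp) keeps re-ingested hourly records idempotent, which suits a feed that is pulled repeatedly from the provider.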
**4.3 Dataset No 3: GFS data**
<table>
<tr>
<th>
**Global Forecast System (GFS) data**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
GFS Portugal Weather Dataset.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
National Oceanic and Atmospheric Administration's Global Forecast System
weather forecast model.
Web: _https://www.ncdc.noaa.gov/data-access/model-data/model-datasets/global-forcast-system-gfs_
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
NOAA.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
UVIGO.
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
UVIGO.
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
UVIGO.
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP3, tasks 3.1, 3.3.
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
Forecast weather data are generated during the four daily execution cycles of
the GFS model, at hourly temporal resolution, on a global grid with ~11 km
horizontal spatial resolution. UVIGO will gather a subset of these data,
limited to the requested variables, for the considered geographic area and
timespan (a minimal subsetting sketch follows this table).
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
SQL formats for storing meteorological data: hourly numeric values for each of
the 9 required meteorological variables (air temperature, atmospheric
pressure, wind speed and direction, maximum wind gust speed and direction,
relative air humidity, instant rain and solar radiation), for each of the
considered grid points (between 1,000 and 2,000), during the Portuguese
meteorological case study time span.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Input for interpolation and short-term prediction algorithms used in WP3.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Confidential.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Collected data will potentially be used in future scientific research papers.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No personal data.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data will be stored in UVIGO computing facilities for the duration of the
SAFEWAY project.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
Data will be stored indefinitely, with no planned destruction.
</td> </tr> </table>
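The subsetting step described above (requested variables, geographic area, timespan) reduces to a simple filter once the GFS output has been decoded; the sketch below uses plain Python with illustrative record fields and an approximate bounding box for mainland Portugal.

```python
# Each decoded GFS record is assumed to look like:
# {"lat": 41.2, "lon": -8.6, "time": "2019-08-01T12:00:00",
#  "variable": "air_temperature", "value": 27.9}


def subset(records, variables, lat_min, lat_max, lon_min, lon_max,
           t_start, t_end):
    """Keep only the requested variables inside the case-study area and span.

    ISO-8601 timestamps compare correctly as strings, so no date parsing
    is needed for the time window.
    """
    return [
        r for r in records
        if r["variable"] in variables
        and lat_min <= r["lat"] <= lat_max
        and lon_min <= r["lon"] <= lon_max
        and t_start <= r["time"] <= t_end
    ]


# Rough bounding box around mainland Portugal (illustrative values).
portugal = subset(records=[], variables={"air_temperature", "wind_speed"},
                  lat_min=36.8, lat_max=42.2, lon_min=-9.6, lon_max=-6.2,
                  t_start="2019-08-01T00:00:00", t_end="2019-08-31T23:00:00")
```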
**4.4 Dataset No 4: Satellite data**
<table>
<tr>
<th>
**Satellite data**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Sentinel-1 images
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Copernicus Open Access Hub
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
Any **Sentinel data** available through the Sentinel Data Hub will be governed
by the Legal Notice on the use of Copernicus
Sentinel Data and Service Information.
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
Planetek Italia
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
Planetek Italia
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
Planetek Italia
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP3 – Displacement monitoring of infrastructures (roads and railways)
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
The metadata information is stored within a product.xml file (a minimal parsing sketch follows this table).
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
OGC standard format. Volume: on the order of terabytes.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The Sentinel-1 images will be exploited using the Multi-Temporal
Interferometry algorithm through the Rheticus ® platform.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Access through the Rheticus® platform, protected by username and password.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
No personal data
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
The data will be stored within the cloud service platform Rheticus ® owned
by Planetek Italia for the entire duration of the project.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
The data will be deleted in the cloud platform Rheticus ® five years after
the end of the project.
</td> </tr> </table>
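To illustrate reading product metadata of the kind stored in product.xml, the following sketch uses Python’s standard ElementTree on a stand-in document; the tags and values shown are hypothetical, as real Sentinel-1 products use schema-specific, namespaced elements.

```python
import xml.etree.ElementTree as ET

# Illustrative stand-in for a Sentinel-1 product.xml; an actual product
# would be parsed from disk with ET.parse("product.xml") instead.
xml_text = """
<product>
  <acquisitionDate>2019-06-15T06:12:43</acquisitionDate>
  <orbitNumber>27641</orbitNumber>
  <polarisation>VV</polarisation>
</product>
"""

root = ET.fromstring(xml_text)
# Walk the metadata tree and print tag/value pairs for inspection.
for element in root:
    print(element.tag, "=", element.text.strip())
```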
**4.5 Dataset No 5: Experts interviews**
<table>
<tr>
<th>
**EXPERTS INTERVIEWS**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
The data contain transcriptions and notes from expert interviews with
researchers and policy makers. Interviews will be conducted in person, by
phone (or Skype), or in written form. The dataset also includes findings from
completed/ongoing EU projects.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Interviews with experts
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4 and 6
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
Production August 2019; anonymised data stored on a secure server
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Word documents
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Gather state-of-the-art knowledge on risk tolerance, aspects of psychology and
behaviour of different user groups.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Scientific articles
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
Informed Consent Forms are defined in Appendix 1.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data will be stored on secured servers of the partner in charge of the
dataset, where only research members will be granted access to the information
within the dataset.
The Consortium will take into account that, for the purposes of the SAFEWAY
project, the retention period is the one used in the relevant field; by
analogy to administrative and financial issues, this will be 5 years.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
Data destruction will be carried out following the partner’s guidelines on
data destruction, which will always comply with EU and national legislation,
and international standards such as ISO 27002:2017.
</td> </tr> </table>
**4.6 Dataset No 6: Data on risk tolerance**
<table>
<tr>
<th>
**DATA ON RISK TOLERANCE**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This includes the evaluation of the risk tolerance of different actors,
scheduling for use in focus groups, and follow-up surveys with different user
representatives.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Focus groups and surveys
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4, 6
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
Production circa January 2020; anonymised data stored on a secure server
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Word documents
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Gather knowledge on risk tolerance, aspects of psychology and behaviour of
different user groups.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Scientific articles
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
Consent will be gathered following templates in Appendix 1 and Appendix 2.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data will be stored on secured servers of the partner in charge of the
dataset, where only research members will be granted access to the information
within the dataset.
The Consortium will take into account that, for the purposes of the SAFEWAY
project, the retention period is the one used in the relevant field; by
analogy to administrative and financial issues, this will be 5 years.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
Data destruction will be carried out following the partner’s guidelines on
data destruction, which will always comply with EU and national legislation,
and international standards such as ISO 27002:2017.
</td> </tr> </table>
**4.7 Dataset No 7: Sociotechnical system analysis**
<table>
<tr>
<th>
**SOCIOTECHNICAL SYSTEM ANALYSIS**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Selected cases will be documented to represent a range of event types
occurring in Europe. Interviews and template analysis will be conducted with
people both managing and caught up in the extreme events studied.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Document analyses
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
TØI
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4 and 6
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
Production circa June 2020; anonymised data stored on a secure server
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Word documents
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Gather knowledge on risk tolerance, aspects of psychology and behaviour of
different user groups.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Scientific articles, report
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data will be stored on secured servers of the partner in charge of the
dataset, where only research members will be granted access to the information
within the dataset.
The Consortium will take into account that, for the purposes of the SAFEWAY
project, the retention period is the one used in the relevant field; by
analogy to administrative and financial issues, this will be 5 years.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
Data destruction will be carried out following the partner’s guidelines on
data destruction, which will always comply with EU and national legislation,
and international standards such as ISO 27002:2017.
</td> </tr> </table>
**4.8 Dataset No 8: Infrastructure assets data**
<table>
<tr>
<th>
**INFRASTRUCTURE ASSETS DATA**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Database of infrastructures with identification, conservation state,
inspections and structural detailing
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Infraestruturas de Portugal; Ferrovial; Network Rail
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
Infraestruturas de Portugal; Ferrovial; Network Rail
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
University of Minho; University of Cambridge; Infrastructure Management
Consultants GmbH
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
University of Minho; University of Cambridge; Infrastructure Management
Consultants GmbH
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
University of Minho
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5 – Task 5.1
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Tables (.xls format) and georeferenced maps (.kml format); a minimal loading sketch follows this table.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
Development of predictive models for projecting risks of future infrastructure
damage, shutdown and deterioration. Based on the database, and analytical and
stochastic/probabilistic approaches, the most suitable models for risk and
impact projections will be selected.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
Database is to be used by members of the Consortium and the derived results
are to be reviewed by the partner owner of data prior to publication
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
Not applicable.
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
There is no personal data
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data will be stored on a physical external disk for the duration of the
project. A copy will also be accessible on a restricted online server for the
partners involved in Task 5.1.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
Data will be retained five years after the project ends. Data destruction will
be carried out following the partner’s guidelines on data destruction, which
will always comply with EU and national legislation, and international
standards such as ISO 27002:2017.
</td> </tr> </table>
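A minimal sketch of loading the asset tables described above, assuming pandas with an Excel engine (xlrd or openpyxl) installed; the file name and column names are assumptions for illustration, not the actual database layout.

```python
import pandas as pd

# Load the infrastructure asset inventory; column names are assumptions.
assets = pd.read_excel("infrastructure_assets.xls")  # hypothetical file name

# Example query: assets whose recorded conservation state is below a
# threshold, as candidate inputs for the predictive deterioration models.
poor_condition = assets[assets["conservation_state"] <= 2]
print(poor_condition[["asset_id", "conservation_state", "last_inspection"]])
```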
**4.9 Dataset No 9: Information on the value system**
<table>
<tr>
<th>
**INFORMATION ON THE VALUE SYSTEM**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
The information on the value systems, decision making processes and key
performance indicators that transportation infrastructure agencies and
stakeholders within the project use in management of their assets. The contact
information that is collected includes email addresses, names and
affiliations.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
On-line survey developed on a freeware software platform.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
IMC; N/A
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
IMC
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
IMC
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
IMC
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP6, Task 6.1
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
None.
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
.xls (MS Excel format).
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be used in WP6 for the development of a robust decision support
framework for short- and medium- to long-term maintenance planning.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Currently confidential; possibly public after project completion.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
See under data access policy.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
See under data access policy.
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
Yes, there are. It is planned to include the related consent as part of the
survey, so that subjects may give it. This will be done using the templates
from Appendices 1 and 2.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data will be stored on secured servers of the partner in charge of the
dataset, where only research members will be granted access to the information
within the dataset.
The Consortium will take into account that, for the purposes of the SAFEWAY
project, the retention period is the one used in the relevant field; by
analogy to administrative and financial issues, this will be 5 years.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
Data destruction will be carried out following the partner’s guidelines on
data destruction, which will always comply with EU and national legislation,
and international standards such as ISO 27002:2017.
</td> </tr> </table>
**4.10 Dataset No 10: Stakeholders contact collection**
<table>
<tr>
<th>
**STAKEHOLDERS CONTACT COLLECTION**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
The data contain information on the main stakeholders of SAFEWAY along the
major stakeholder groups. They include infrastructure managers, operators,
public administrations, researchers, practitioners, policy makers. The contact
information that is collected includes the name, institutional affiliation,
position, email address, phone number and office address.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Archives of SAFEWAY partners.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
UVIGO; N/A
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP10:
-Task 10.1 (Dissemination, communication and IP management).
-Task 10.2 (Standardization activities)
-Task 10.3 (Technology transfer activities)
-Task 10.4 (Collaboration and clustering)
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
This dataset can be imported from, and exported to a CSV, TXT or Excel file.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
This dataset is only used to disseminate the results obtained through SAFEWAY
project.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
As this dataset can contain personal data, only the partner in charge of the
data collection will have access to the raw data. Data that is publicly
available will be shared among consortium partners (a minimal filtering sketch
follows this table).
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
This dataset can include some personal data. Before collecting any personal
data that is not publicly available, informed consents from subjects will be
gained. Consent will be gathered following the template from Appendix 1.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data will be stored on secured servers of the partner in charge of the
dataset, where only research members will be granted access to the information
within the dataset.
The Consortium will take into account that, for the purposes of the SAFEWAY
project, the retention period is the one used in the relevant field; by
analogy to administrative and financial issues, this will be 5 years.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
Data destruction will be carried out following the partner’s guidelines on
data destruction, which will always comply with EU and national legislation
and with international standards such as ISO 27002:2017. In this particular
case, the Spanish national public administration’s guidelines on electronic
data destruction will be considered.
</td> </tr> </table>
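The access policy above (raw contact data restricted to the collecting partner, only publicly available information shared) could be enforced at export time as sketched below; the file names and the choice of shareable fields are assumptions for illustration.

```python
import csv

# Fields treated as publicly available in this sketch; direct contact
# details (email, phone, address) stay with the partner in charge.
PUBLIC_FIELDS = ["name", "institutional_affiliation", "position"]

with open("stakeholders_raw.csv", newline="", encoding="utf-8") as src, \
     open("stakeholders_shared.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=PUBLIC_FIELDS)
    writer.writeheader()
    for row in reader:
        # Copy only the shareable columns into the consortium-wide export.
        writer.writerow({field: row[field] for field in PUBLIC_FIELDS})
```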
**4.11 Dataset No 11: Dissemination events data**
<table>
<tr>
<th>
**Dissemination events Data**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
The dataset contains the contact information of attendees at events organised
by SAFEWAY (SAFEWAY workshop, parallel events, SAFEWAY webcast), provided
during their registration for the event. The collected contact information may
include: name, institutional affiliation, position, email address, phone
number and office address.
In addition, a voluntary survey will be circulated at the end of each event to
collect feedback from the attendees in order to continuously increase quality.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Archives of SAFEWAY partners.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
UVIGO; N/A
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP10:
-Task 10.3 (Technology transfer activities)
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
This dataset can be imported from, and exported to a CSV, TXT or Excel file.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
This dataset is only used for dissemination of the results obtained through
SAFEWAY project.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
As this dataset can contain personal data, only the partner in charge of the
data collection will have access to the raw data. Data that is publicly
available will be shared among consortium partners.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
This dataset can include some personal data. Before collecting any personal
data that is not publicly available, informed consents from subjects will be
gained. Consent will be gathered following the template from Appendix 1.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data will be stored on secured servers of the partner in charge of the
dataset, where only research members will be granted access to the information
within the dataset.
The Consortium will take into account that, for the purposes of the SAFEWAY
project, the retention period is the one used in the relevant field; by
analogy to administrative and financial issues, this will be 5 years.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
Data destruction will be carried out following the partner’s guidelines on
data destruction, which will always comply with EU and national legislation
and with international standards such as ISO 27002:2017. In this particular
case, the Spanish national public administration’s guidelines on electronic
data destruction will be considered.
</td> </tr> </table>
**4.12 Dataset No 12: Stakeholders feedback**
<table>
<tr>
<th>
**STAKEHOLDERS FEEDBACK**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Dataset containing responses from key stakeholders and Advisory Board members
to different technical feedback surveys that will be produced during the
project to gather feedback about the technical implementation of the project.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
A set of on-line surveys developed on a freeware software platform.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
UVIGO; N/A
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP10
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
This dataset can be imported from, and exported to, CSV, TXT or Excel files.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The information gathered through the surveys will support the development of
the SAFEWAY methodologies and tools.
The information will also contribute to quantifying SAFEWAY’s impact.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
The database is to be used by members of the Consortium, and the derived
results are to be reviewed by the partner owning the data prior to publication.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
Not applicable.
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
Surveys will be anonymised or, where that is not possible, pseudonymised. Only
restricted project personnel will have access to the raw survey data. When the
responses are transferred to the MS Excel database, any personal data (in case
these exist) will be omitted.
Consent will be gathered following the template from Appendix 1.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Data will be stored in secured servers of the partner in charge of the
dataset, where only research members will be granted access to the information
within the dataset.
The Consortium will take into account that, for the purposes of the SAFEWAY
project, the retention period is the one used in the relevant field; by analogy
with administrative and financial matters, this will be 5 years.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
Data destruction will be carried out following the partner’s guidelines on
data destruction, which will always comply with EU and national legislation,
and international standards such as ISO 27002:2017. In this particular case,
the Spanish national public administration’s guidelines on electronic data
destruction will be considered.
</td> </tr> </table>
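Since raw responses may contain identifying fields that must be omitted before they reach the shared database, as described above, the following is a minimal sketch of one way to do this. The column names and file names are hypothetical, and the use of the pandas library is an assumption, not a tool prescribed by the project.

```python
import pandas as pd

# Hypothetical identifying columns; the real survey fields may differ.
PERSONAL_DATA_COLUMNS = ["name", "surname", "email", "phone"]

def export_anonymised(raw_csv: str, out_xlsx: str) -> None:
    """Drop personal-data columns from raw survey responses before
    transferring them to the shared MS Excel database."""
    responses = pd.read_csv(raw_csv)
    # errors="ignore" keeps the export working if a column is absent.
    anonymised = responses.drop(columns=PERSONAL_DATA_COLUMNS, errors="ignore")
    anonymised.to_excel(out_xlsx, index=False)

export_anonymised("raw_survey_responses.csv", "survey_database.xlsx")
```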
**4.13 Dataset No 13 Crowd-sourced data**
<table>
<tr>
<th>
**Crowd-sourced data**
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
TomTom Traffic Incidents delivers information on the current observed
congestion and incidents on roads in all countries where we offer this
service. Traffic ‘incidents’ in this context include traffic jams, road
closures, lane closures, construction zones, and accidents. TomTom Traffic
Flow delivers a detailed view of the current observed speed and travel times
on the entire road network in all countries where TomTom Traffic is available.
This product is designed for easy integration into routing engines to
calculate precise travel times.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Real-time traffic products are created by merging multiple data sources,
including anonymized measurement data from over 550 million GPS-enabled
devices. Using highly granular data, gathered on nearly every stretch of road,
we can calculate travel times and speeds continuously.
</td> </tr>
<tr>
<td>
**Partners activities and responsibilities**
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
Innovactory; TomTom
</td> </tr>
<tr>
<td>
Partner in charge of the data collection
</td>
<td>
Innovactory
</td> </tr>
<tr>
<td>
Partner in charge of the data analysis
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Partner in charge of the data storage
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4 task T4.1
WP5 task T5.1
</td> </tr>
<tr>
<td>
**Standards**
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
_https://developer.tomtom.com/traffic-api/traffic-api-documentation-traffic-flow/flow-segment-data_
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
Traffic Incidents via TPEG2-TEC / Traffic Flow via TPEG2-TFP as well as OpenLR
to deliver reports that describe incidents or congestion on any road, on any
map.
DATEX II is an industry standard for information exchange between service
providers, application developers and traffic management centers.
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
The data will be used in WP4 and WP5 and can only be used in the setting of
SAFEWAY under the conditions Innovactory agreed with TomTom. Data for later use
are available, but under different commercial agreements.
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
No personal data are supplied. Privacy-sensitive data collected by TomTom are
automatically and irreversibly destroyed on the TomTom side within 24 hours;
data that would allow TomTom to identify users from their location data are
destroyed. Innovactory only receives the merged incident and flow data, which
are supplied to SAFEWAY.
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
For SAFEWAY demonstration purposes only.
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
None
</td> </tr>
<tr>
<td>
Personal data protection: are there personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
Data provided by TomTom to Innovactory do not contain any personal data, and it
has been ensured that the information contained in the data cannot allow the
identification of any person or personal data.
In order to ensure personal data protection, a continuous monitoring approach
will be followed. In this sense, the personal data protection
systems/procedures implemented by TomTom will be reviewed every six months to
detect any possible changes.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
Innovactory does not store flow or incident data.
</td> </tr>
<tr>
<td>
Data destruction. How is data destruction handled? Compliance with EU /
national legislation.
</td>
<td>
If there is a need to buffer the incident/flow data, data destruction will be
carried out following the partner’s guidelines on data destruction, which will
always comply with EU and national legislation. Innovactory is currently
implementing ISO 27001 (targeted for completion in Q2 2020).
</td> </tr> </table>
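For illustration of how the flow data described above can be consumed, here is a minimal sketch of a query against TomTom's Flow Segment Data endpoint. The URL shape follows the documentation linked in the table, but the service version, zoom level and response fields shown are assumptions, and the API key is a placeholder (real access is governed by the commercial agreements mentioned above).

```python
import requests

API_KEY = "YOUR_TOMTOM_API_KEY"  # placeholder; issued under a commercial agreement

def flow_segment(lat: float, lon: float) -> dict:
    """Query observed speed and travel time for the road segment closest
    to a coordinate (endpoint shape assumed from the documentation above)."""
    url = (
        "https://api.tomtom.com/traffic/services/4/"
        f"flowSegmentData/absolute/10/json?point={lat},{lon}&key={API_KEY}"
    )
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    data = response.json()["flowSegmentData"]
    return {
        "currentSpeed": data["currentSpeed"],            # km/h, observed
        "freeFlowSpeed": data["freeFlowSpeed"],          # km/h, uncongested
        "currentTravelTime": data["currentTravelTime"],  # seconds
    }

# Example: a point in Vigo, Spain (illustrative coordinates).
print(flow_segment(42.2406, -8.7207))
```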
## **5\. Outlook Towards Next DMP**
As stated in Table 1 of the Introduction, the next iteration of the DMP will
be prepared in month 30 of the project, just after WPs 3, 4 and 5 finish. By
then, every other work package (apart from WP2, which ends in M18) and their
tasks will be underway. Most of the data included in the datasets described in
this document will either be under collection or have already been collected.
Therefore, the upcoming DMP will provide an update on the status of each
dataset and on the work planned until the end of the project. Furthermore,
should additional datasets be identified as necessary for the completion of the
project, this will be reported in the upcoming version of the DMP. For this
purpose, and to ease the identification of possible needs to ensure personal
data protection, a guidance document has been produced (Appendix 4) for
partners to consult and to identify (if applicable) potential additional
datasets.
## **6\. Update of the Ethical Aspects**
At this stage of the project, there are two main ethical aspects to review:
first, the outcome of the continuous monitoring process on ethical aspects, in
particular regarding vehicle data crowdsourcing and the interviews and surveys
carried out during the development of WP4; and second, the report of the
Ethics Mentor.
**6.1 Ongoing monitoring**
The ongoing monitoring of SAFEWAY’s ethical aspects has focused, at a first
stage, on identifying the tasks with relevance for data protection within the
different activities of the project. It was concluded that the data protection
risk posed by SAFEWAY is fairly limited, as the only task that might involve
personal data collection is related to dissemination activities in workshops
and meetings (see Sections 4.10 and 4.11), and explicit and verifiable consent
will be obtained prior to any data collection, as required by the GDPR.
Furthermore, procedures for the collection, processing, storage, retention and
destruction of data have been defined to ensure compliance with the
legislative framework (see the tables in Section 4). In addition, for those
activities that require it (interviews and surveys), an informed consent form,
together with an information sheet about the research study, has been defined
(see Appendices 1 and 2). Finally, any website or online form used for
surveys, or for people to register to attend dissemination events or meetings,
requires the person filling it in to accept a Privacy Policy that also
complies with the GDPR requirements, such as the one contained in Appendix 2
or in the link ( _https://www.safeway-project.eu/en/personal-data_ ).
**6.2 Report of the Ethics Mentor**
The appointed Ethics Mentor (EM) for SAFEWAY is Ms Vanesa Alarcón Caparrós
(see deliverable D11.3). The tasks carried out by the EM up to the submission
of this deliverable are summarised below:
* The EM has reviewed documents linked to the project that could involve any personal data processing, as well as situations or other documents where SAFEWAY partners have had any doubt or where a data processing operation or risk may arise. Following each review, the EM has indicated the corresponding measures or recommendations for partners to take. As a result, for example, Appendix 2 has been updated several times; the latest version is attached to this DMP.
* The EM has continuously monitored whether the project, in any of its different phases (as mentioned in Section 3), is collecting or performing any personal data processing. It has been detected that a large amount of data has been or will be collected by the project; however, the majority of these data do not contain personal information.
* The EM has delivered guidance to avoid future risks in data processing (see Appendix 3). The guidance provides several recommendations for partners when preparing surveys during the project, in order to identify whether there are data that cannot be collected. Furthermore, the guidance also proposes ways to collect these data, such as online or physical forms (informed consent on data processing must always be collected whenever personal data are involved; however, the way to collect it differs depending on the format in which the information is collected). See details in Appendix 3.
* The EM has provided a questionnaire (see Appendix 4) about the interoperation and exchange of data, in order to detect whether there is any potential risk regarding personal data processing when the project is collecting information.
* The EM has recommended establishing a procedure for data destruction, covering not only the deletion of personal information but also of confidential information (see Section 4).
* The EM has recommended preparing a Code of Best Practices for employees, partners and freelancers working with personal information. It must be tailored to each type of professional, since each could be affected in different ways. In particular, it is recommended to establish a Bring Your Own Device policy where needed.
* The EM has suggested, for the technical implementation of data management and destruction, following the instructions of ISO 27002:2017 or similar procedures validated in the market.
* The EM has provided feedback on the present version of the Data Management Plan, in particular regarding the introduction of some corrections and the addition of some fields to cover more aspects connected with personal data processing.
The EM has been in close contact with the coordinator (UVIGO) via direct mail,
phone and regular meetings, and has had access to all the information about
the activities of the project that could imply ethically relevant aspects, via
email and by accessing the common repository of SAFEWAY. The actions expected
to be carried out by the EM in the upcoming months are described below.
* The DMP will be updated in months 30 and 42, as indicated in Section 1. As part of this update the status of each dataset must be considered; moreover, SAFEWAY partners must remain alert to any possible doubts regarding data processing, in which case these doubts will be addressed to the EM. In this sense, and in compliance with the proactivity principle established in European regulation, the project coordinators must send a communication informing the partners about this obligation.
* The EM will schedule regular meetings with the Coordinator, every six months until month 42, to check if there are doubts or any situation that must be considered or reviewed by the EM.
* The EM will be available to attend formal project meetings remotely or in person; if required, partners will give advance notice by including the pertinent action(s) in the meeting agenda.
# Acknowledgements
This deliverable was carried out in the framework of the GIS-Based
Infrastructure Management System for Optimized Response to Extreme Events of
Terrestrial Transport Networks (SAFEWAY) project, which has received funding
from the European Union’s Horizon 2020 research and innovation programme under
grant agreement No 769255.
# SAFEWAY
GIS-BASED INFRASTRUCTURE MANAGEMENT SYSTEM
FOR OPTIMIZED RESPONSE TO EXTREME EVENTS OF TERRESTRIAL TRANSPORT NETWORKS
**Grant Agreement No. 769255**
**Data Management Plan (DMP) V1** \- **Appendices**
WP 1 Overall project coordination
<table>
<tr>
<th>
**Deliverable ID**
</th>
<th>
**D1.3**
</th> </tr>
<tr>
<td>
**Deliverable name**
</td>
<td>
**Data Management Plan (DMP) V1**
</td> </tr>
<tr>
<td>
Lead partner
</td>
<td>
UVIGO
</td> </tr>
<tr>
<td>
Contributors
</td>
<td>
DEMO, PNK, UMINHO, IMC
</td> </tr> </table>
**PUBLIC**
## PROPRIETARY RIGHTS STATEMENT
This document contains information, which is proprietary to the SAFEWAY
Consortium.
Neither this document nor the information contained herein shall be used,
duplicated or communicated by any means to any third party, in whole or in
part, except with prior written consent of the SAFEWAY Consortium.
## **Appendices Contents**
* **Appendix 1: Informed Consent Form**
* **Appendix 2: Protection of Personal Data within SAFEWAY**
* **Appendix 3: Guideline for the Elaboration of Surveys**
* **Appendix 4: Interoperability and Exchange of Data**
LEGAL NOTICE
The sole responsibility for the content of this publication lies with the
authors. It does not necessarily reflect the opinion of the European Union.
Neither the Innovation and
Networks Executive Agency (INEA) nor the European Commission are responsible
for any use that may be made of the information contained therein.
<table>
<tr>
<th>
**Appendix 1.**
</th>
<th>
**Informed Consent Form**
</th> </tr> </table>
GIS-Based Infrastructure Management System for Optimized
Response to Extreme Events of Terrestrial Transport Networks
**INFORMED CONSENT FORM**
<table>
<tr>
<th>
Project acronym
</th>
<th>
SAFEWAY
</th> </tr>
<tr>
<td>
Project name
</td>
<td>
GIS-BASED INFRASTRUCTURE MANAGEMENT SYSTEM
FOR OPTIMIZED RESPONSE TO EXTREME EVENTS OF
TERRESTRIAL TRANSPORT NETWORKS
</td> </tr>
<tr>
<td>
Grant Agreement no.
</td>
<td>
769255
</td> </tr>
<tr>
<td>
Project type
</td>
<td>
Research and Innovation Action
</td> </tr>
<tr>
<td>
Start date of the project
</td>
<td>
01/09/2018
</td> </tr>
<tr>
<td>
Duration in months
</td>
<td>
42
</td> </tr>
<tr>
<td>
This project has received funding from the European Union’s Horizon 2020
research and innovation programme under Grant Agreement No 769255.
</td> </tr>
<tr>
<td>
Disclaimer: This document reflects only the views of the author(s). Neither
the Innovation and Networks Executive Agency (INEA) nor the European
Commission is in any way responsible for any use that may be made of the
information it contains.
</td> </tr> </table>
**SAFEWAY event:**
**Date: Location:**
**General Data Protection Regulation (GDPR) Compliance**
Data that is collected and processed for the purposes of facilitating and
administering SAFEWAY workshops and events is subject to the EU General Data
Protection Regulation (GDPR), which became applicable on 25 May 2018. Please
see the document “POPD SAFEWAY.pdf” for further guidance on our data
management policies. To process your application, we require your consent to
the following (please check each box as appropriate).
<table>
<tr>
<th>
**Please circle as necessary**
</th> </tr>
<tr>
<td>
I give my consent for all personal information provided by registering to the
SAFEWAY ( _workshop/event name_ ) to be stored and processed by relevant
SAFEWAY project partners for Data Management Purposes.
</td>
<td>
**Yes**
</td>
<td>
**No**
</td> </tr>
<tr>
<td>
I give my consent for all personal information provided by registering to the
SAFEWAY ( _workshop/event name_ ) to be stored and processed by SAFEWAY
partners for the purpose of administering the SAFEWAY ( _workshop/event name_
).
</td>
<td>
**Yes**
</td>
<td>
**No**
</td> </tr>
<tr>
<td>
I give my consent for all personal information provided by registering to the
SAFEWAY ( _workshop/event name_ ) to be processed by the SAFEWAY (
_workshop/event name_ ) organizers to evaluate and decide on my application
where workshop places are limited.
</td>
<td>
**Yes**
</td>
<td>
**No**
</td> </tr>
<tr>
<td>
I give my consent for all personal information provided by registering to the
SAFEWAY ( _workshop/event name_ ) to be stored and processed by UVIGO for the
purpose of overall coordination of the SAFEWAY project.
</td>
<td>
**Yes**
</td>
<td>
**No**
</td> </tr>
<tr>
<td>
I give my consent for all personal information provided by registering to the
SAFEWAY ( _workshop/event name_ ) to be passed to UVIGO and FERROVIAL for
storage and processing for the purposes of supporting exploitation and
dissemination of workshop related information.
</td>
<td>
**Yes**
</td>
<td>
**No**
</td> </tr>
<tr>
<td>
I give my consent for the following personal information to be passed on to
the European Commission in case my workshop application is approved: name,
surname, title, organization, position, email address, phone number.
</td>
<td>
**Yes**
</td>
<td>
**No**
</td> </tr>
<tr>
<td>
I give my consent for the following personal information to be published on
the Internet and elsewhere for the purposes of project transparency: name,
surname and organisation affiliation.
</td>
<td>
**Yes**
</td>
<td>
**No**
</td> </tr>
<tr>
<td>
I give my consent for my e-mail address to be published on the Internet or
elsewhere to assist others to contact me (optional).
</td>
<td>
**Yes**
</td>
<td>
**No**
</td> </tr> </table>
**_PARTICIPANT CERTIFICATION_ **
I have read the _PROTECTION OF PERSONAL DATA WITHIN SAFEWAY_
(_https://www.safeway-project.eu/en/personal-data_) and answered all the
questions in the table above. I have had the opportunity to ask, and I have
received answers to, any questions I had regarding the protection of my
personal data. By my signature I affirm that I am at least 18 years old and
that I have received a copy of this Consent and Authorization form.
…………………………………………………………………………………………………
Name and surname of participant
…………………………………………………………………………………………………
Place, date and signature of participant
**NB: Attach this completed form to your SAFEWAY _(workshop/event name)_
application. **
Further information: for any additional information or clarification about
data protection please contact the SAFEWAY coordinator at UVIGO (
[email protected]_ ), or alternatively the University of Vigo’s Data
Protection Officer (DPO), Ana Garriga Domínguez, with address at: Faculty of
Law, As Lagoas s/n, 32004, Ourense, Spain ( [email protected]_ ). This consent
form does not remove any of your rights under the GDPR but provides us with
the necessary permissions to process your application and manage SAFEWAY
workshops and events.
<table>
<tr>
<th>
**Appendix 2.**
</th>
<th>
**Protection of Personal Data Within SAFEWAY**
</th> </tr> </table>
GIS-Based Infrastructure Management System for Optimized
Response to Extreme Events of Terrestrial Transport Networks
**PROTECTION OF PERSONAL DATA WITHIN SAFEWAY**
<table>
<tr>
<th>
Project acronym
</th>
<th>
SAFEWAY
</th> </tr>
<tr>
<td>
Project name
</td>
<td>
GIS-BASED INFRASTRUCTURE MANAGEMENT SYSTEM
FOR OPTIMIZED RESPONSE TO EXTREME EVENTS OF
TERRESTRIAL TRANSPORT NETWORKS
</td> </tr>
<tr>
<td>
Grant Agreement no.
</td>
<td>
769255
</td> </tr>
<tr>
<td>
Project type
</td>
<td>
Research and Innovation Action
</td> </tr>
<tr>
<td>
Start date of the project
</td>
<td>
01/09/2018
</td> </tr>
<tr>
<td>
Duration in months
</td>
<td>
42
</td> </tr>
<tr>
<td>
This project has received funding from the European Union’s Horizon 2020
research and innovation programme under Grant Agreement No 769255.
</td> </tr>
<tr>
<td>
Disclaimer: This document reflects only the views of the author(s). Neither
the Innovation and Networks Executive Agency (INEA) nor the European
Commission is in any way responsible for any use that may be made of the
information it contains.
</td> </tr> </table>
**PROTECTION OF PERSONAL DATA WITHIN SAFEWAY**
**_INTRODUCTION_ **
The SAFEWAY project assumes the responsibility of complying with current
legislation on data protection, guaranteeing the protection of personal
information in a lawful and transparent manner in accordance with Regulation
(EU) 2016/679 of the European Parliament and of the Council of 27 April 2016
on the protection of natural persons with regard to the processing of personal
data and on the free movement of such data (GDPR), and with the national and
regional regulations and the University of Vigo’s internal regulations on the
protection of personal data.
This document describes in detail the circumstances and conditions of the
processing of personal data and the rights of the interested persons.
As coordinator of the action, the University of Vigo is the data controller
for all personal data collected for workshops and other communication and
dissemination events. The University of Vigo has appointed Ana Garriga
Domínguez as Data Protection Officer (DPO), with address at: Faculty of Law,
As Lagoas s/n, 32004, Ourense, Spain ( [email protected]_ ).
**_PURPOSE:_ **
SAFEWAY partners will only collect the personal data strictly necessary in
relation to the purposes for which they are processed, in accordance with the
principles set out in Article 5 of the GDPR. The information necessary to
guarantee fair and transparent processing will be provided to the interested
persons at the moment of collection, in accordance with the provisions of
Articles 13 and 14 of the GDPR.
The data collected by SAFEWAY for dissemination activities aim to reach the
widest audience, in order to disseminate SAFEWAY project outcomes and to
communicate the knowledge gained by its partners over the duration of the
project.
The workshops and meetings with stakeholders are intended to present and
discuss all project results, not only among project partners but also openly
with stakeholders and other target groups. The events will be targeted at
technology innovators in infrastructure management, including end-users,
materials and technology suppliers, the research community, regulatory
agencies, standardization bodies and all potential players interested in
fields associated with the innovative resilience of transport infrastructure,
with a special focus on applications to railways and roads.
**_PROCESSING OF PERSONAL DATA:_ **
Your Personal Data is provided freely. Where specified in the registration
form, the provision of Personal Data is necessary to provide you with the
services expected from the dissemination event and access to SAFEWAY project
results. If you refuse to communicate these Data, it may be impossible for the
Data Controller to fulfil your request. On the contrary, with reference to
Personal Data not marked as mandatory, you can refuse to communicate them, and
this refusal shall not have any consequence for your participation in and
attendance at SAFEWAY dissemination activities.
The provision of your Personal Data for the publication of your contact
details on the Internet or elsewhere, for networking purposes implemented by
the Data Controller, is optional; consequently, you can freely decide whether
or not to give your consent, or withdraw it at any time. If you decide not to
give your consent, the partner responsible for SAFEWAY dissemination will not
be able to carry out the aforementioned activities.
SAFEWAY will never collect any special categories of Personal Data (personal
data revealing racial or ethnic origin, political opinions, religious or
philosophical beliefs, or trade union membership, genetic data, biometric
data, data concerning health or data concerning a natural person’s sex life or
sexual orientation – Art. 9 of the GDPR). SAFEWAY expressly asks you to avoid
providing these categories of Data. In the event that you voluntarily choose
to give us these Data, SAFEWAY may decide not to process them, or to process
them only with your specific consent or, in any event, in compliance with the
applicable law.
In the event that third-party Personal Data is accidentally communicated to
SAFEWAY, you become an autonomous Data Controller for those Data and assume
all the related obligations and responsibilities provided by the law. In this
regard, SAFEWAY is exempt from any liability arising from claims or requests
made by third parties whose Data have been processed through your spontaneous
communication of them to us, in violation of the law on the protection of
Personal Data. In any event, if you provide or process third-party Personal
Data, you must guarantee as of now, assuming any related responsibility, that
this particular processing is based on a legal basis pursuant to Art. 6 of the
GDPR.
**_DATA STORAGE AND RETENTION:_ **
The personal data provided will be kept for the time necessary to fulfil the
purpose for which they are requested and to determine any liabilities that
could derive from that purpose, in addition to the periods established in the
regulations on files and documentation. Unless otherwise stated, the data will
be retained for a period of five years after the end of the project, as these
data can support the reporting of some of the implemented activities.
During this period, the data will be stored in a secured area accessible by a
limited number of researchers. SAFEWAY data managers will apply appropriate
technical and organizational measures to guarantee a level of security
appropriate to the risk, in accordance with the provisions of Article 32 of
the GDPR. The system also allows the use of data to be tracked. Five years
after the end of the project, the data will be destroyed under the supervision
of the Data Protection Officer at the University of Vigo, as coordinating
organization of SAFEWAY.
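As a simple illustration of this retention rule, the sketch below derives the destruction date from the project start date (01/09/2018) and the 42-month duration stated on the cover table; the exact end-of-project date shown is an assumption computed from those figures.

```python
from datetime import date
from dateutil.relativedelta import relativedelta

# Project start (01/09/2018) and 42-month duration, as stated above.
project_end = date(2018, 9, 1) + relativedelta(months=42) - relativedelta(days=1)
retention_ends = project_end + relativedelta(years=5)
print(project_end)     # 2022-02-28
print(retention_ends)  # 2027-02-28: destruction under DPO supervision
```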
**_RIGHTS OF THE DATA SUBJECT:_ **
Any person, as the holder of personal data, has the following rights,
recognized in the terms and under the conditions indicated in Articles 15-22
of the GDPR:
* Right of Access: obtain from the controller confirmation as to whether or not personal data concerning you are being processed, more information on the processing, and a copy of the personal data processed.
* Right to Rectification: obtain from the controller, without undue delay, the rectification of inaccurate personal data concerning you, and the right to have incomplete personal data completed.
* Right to Erasure: obtain from the controller, without undue delay, the erasure of personal data concerning you.
* Right to Restriction of Processing: obtain the restriction of the processing in the event you assume that your data are incorrect, the processing is illegal or if these data are necessary for the establishment of legal claims.
* Right to Data Portability: receive the personal data concerning you, which you have provided to a controller, in a structured, commonly used and machine-readable format, in order to transfer these data to another Controller.
* Right to Object: object, on grounds relating to your particular situation, to the processing of personal data concerning you, unless the controller demonstrates compelling legitimate grounds for the processing. You can also object to the processing of your data where they are processed for direct marketing purposes.
* Right to withdraw the Consent: withdraw the consent at any time. The withdrawal of consent shall not affect the lawfulness of processing based on consent before its withdrawal.
The data subject may exercise these rights free of charge and has the right to
receive a response, within the deadlines established by current data
protection legislation, by contacting the SAFEWAY project coordinators at:
[email protected]_ , or by contacting the Data Protection Officer at:
[email protected]_ .
**_CONTACT PERSON_ **
For any additional information or clarification please contact SAFEWAY
coordinators at UVIGO ( [email protected]_ ). This consent form does not
remove any of your rights under GDPR but provides us with the necessary
permissions to process your application and manage SAFEWAY workshops and
events.
<table>
<tr>
<th>
**Appendix 3.**
</th>
<th>
**Guidelines for the Elaboration of Surveys**
</th> </tr> </table>
In general, most surveys considered within SAFEWAY (see Section 4) are not
intended to collect personal data. The overall recommendation when designing a
survey is that neither the name nor other personal data (e.g., email, address,
affiliation) of the person filling in the survey should be collected. For
instance, if the survey is collected by means of physical forms, the forms
should not include any field where this information is requested.
Alternatively, when the survey is collected online, appropriate mechanisms to
avoid the collection of personal data must be implemented. For example,
although it is possible to send a link to the online platform hosting the
survey by email, targeted either to a generic group of people or to specific
individuals, it must be ensured that neither the online platform nor the
survey itself collects any personal data; this includes the email address to
which the link was originally sent.
For those cases where surveys are intended to collect personal data (see
Sections 4.10 and 4.11), they should be designed in such a way that they
collect only the information needed for the purposes of the project. For
example, it would be reasonable to collect personal data such as the name,
age, studies or profession of someone who could potentially be interested in
the services or products offered by SAFEWAY; however, it would not be
acceptable to collect the personal address or an image or picture of the
person.
Under no circumstances must personal data collected in surveys go beyond the
information needed for the purpose of the survey/project. The following list
summarises the most common types of personal data usually collected in surveys
similar to those described in Sections 4.10 and 4.11. In any case, the SAFEWAY
partner in charge of the survey, together with the Project Coordinator, must
analyse the main aim and objectives of the survey and identify which kinds of
personal data the project can, or cannot, justifiably collect.
* Name
* Surname
* ID card number
* Personal address
* The country of residence or origin
* A picture or image of a person
* Personal email
* Corporate email
* Phone number
* A fingerprint
* Level of studies
* Ideology or creed
* Political beliefs
* Race
* Sex life
* Health
* Bank account number
* Financial information
* Social media profiles
* Hobbies
If, after conducting this analysis, it is still decided to collect any kind of
personal information, an informed consent form (see Appendix 1) must be
included together with the survey. The informed consent must tell the person
answering the survey who is responsible for the data processing and how to
contact them, the purpose of the data processing, whether any of the
information is not mandatory to provide, and how to exercise their data
protection rights. Together with the informed consent, a copy of the
Protection of Personal Data policy established by the SAFEWAY project (see
Appendix 2) must be included, or alternatively a link to where it can be found
on the SAFEWAY website ( _https://www.safeway-project.eu/en/personal-data_ ).
## **Appendix 4\. Interoperability and Exchange of Data**
This project has received funding from the European Union’s Horizon 2020
research and innovation programme under grant agreement No 769255. The sole
responsibility for the content of this presentation lies with the author. It
does not necessarily reflect the opinion of the European Union. Neither the
Innovation and Networks Executive Agency (INEA) nor the European Commission
are responsible for any use that may be made of the information contained
therein.
# Questions to determine Privacy and Confidentiality requirements
* These questions must be answered when: the project intends to collect new data or information; a new phase of the project begins that requires the collection of personal or confidential information; the project uses new programs or systems to collect or process data; or simply when a new project begins that could require data collection.
* Through this questionnaire we want to ensure the protection of the information collected, not only because of privacy or data protection, but also in regards to issues of confidentiality related to the project.
* The aim of these questions is to detect or prevent situations that could pose risks for the project, such as legal issues, security issues or ethical issues.
# Previous recommendations
* When you answer the following questionnaire, please take note of whether your answer requires checking another document, such as the FAQs or other documents or links.
* When you answer the questionnaire and your answer implies a RISK, you should contact your DPO or Ethics Advisor in order to define the type of risk and to consider possible ways to limit or minimize such a risk.
The questionnaire is laid out as a decision flowchart; its questions and answer options are reproduced below. Answers marked **RISK** should be discussed with the DPO or Ethics Mentor (see also Notes 1 and 2 at the end of this appendix).

* **Are you currently using any system that allows for the exchange of personal data?** If not, this is flagged as a **RISK**; if yes, see Note 1 and continue with the questions below.
* **Are you informing individuals about the treatment and use of their personal data?** If not, this is a **RISK**.
* **Does that information contain personal or confidential information, and how is it shared?** If it does, this is a **RISK** (see Note 2).
* **How are you processing this type of information?**
* By mail in the same network
* By mail but over different networks or connections
* Through our platform
* Other, please specify (and check whether there is any risk)
* **What type of program do you use to process or store the information?**
* A licensed program: have you accepted the terms and conditions of the licence, and is there any mention of privacy or confidentiality (yes to both; yes, but only privacy; yes, but only confidentiality; no)? A licence with no mention of privacy or confidentiality is a **RISK**.
* No program is used to process the information: this is a **RISK**.
* A program developed for the company/project: it is important to carry out a PIA or EIPD, or at least a small risk analysis.
* **Do you share information about the project?** If yes, this is a **RISK**: please contact the DPO or Ethics Mentor (see Note 2).
* **What are you doing with the personal data after the project?**
* We will delete the information. What measures will you take to do this? Please specify: deleting documentary information with a shredder; deleting digital information by formatting the media; overwriting the information with 0s and 1s after deleting or formatting; demagnetizing the media (see the recommendations at the end).
* Store for the legal retention period and delete once this period has elapsed.
* Store forever: this is a **RISK**; please contact the DPO or Ethics Mentor, or consult the Good Practices Code.
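As a purely illustrative sketch of the "overwrite with 0s and 1s" option above, the snippet below overwrites a file before unlinking it. This is one possible implementation chosen here for illustration; on SSDs and journaling filesystems it does not guarantee forensic erasure, so certified tools following ISO 27002 guidance should be preferred.

```python
import os

def overwrite_and_delete(path: str, passes: int = 2) -> None:
    """Overwrite a file alternately with 0x00 and 0xFF bytes, then delete it.
    Illustrative only: on SSDs and journaling filesystems this does not
    guarantee that every physical copy of the data is destroyed."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for i in range(passes):
            f.seek(0)
            f.write((b"\x00" if i % 2 == 0 else b"\xff") * size)
            f.flush()
            os.fsync(f.fileno())  # force the overwrite to disk
    os.remove(path)

# Hypothetical file name, for illustration only.
overwrite_and_delete("obsolete_survey_responses.csv")
```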
# Recommendations
* **General recommendations**
* **Security measures**
* **Some recommendations about encryption**
* **Technical and physical measures**
* **FAQs**
* **Summary of terms**
* **More info or interesting links good to know**
## General recommendations
* On any platform where you present your idea or project to the public you must include: a Legal Notice, a Cookies Policy (if applicable) and a Privacy Policy (if you collect personal data through the platform – website, app, landing page, form, etc.).
* When you use a form to collect personal data you must include a checkbox, linking to your Privacy Policy, that can be marked or clicked and that indicates something similar to: “I have read and I accept the Privacy Policy”.
Even if you do not collect personal data but you do collect corporate or
company information, and this could be considered confidential information,
you should consider implementing the security measures recommended in the
following pages to protect this information.
## Security Measures recommended
* Usual measures for normal information:
* A protocol specifying the backup procedures for the information collected
* Systems controlled by users with controlled access, with properly defined profiles and roles
* Use of programs that comply with data protection regulation or that ensure the confidentiality or security of this type of information
* An inventory of the computers, tools, programs and systems used
* An inventory of the users that can access the project
* A protocol describing how access to the project systems is granted, how roles or access permissions are changed, and how users are removed, among others
* Login passwords must be changed at least once a year
* Procedures for the exercise of the rights of users/clients
* Procedures to register any important incident involving data and to communicate it to the competent authority or to the user affected
## Security Measures
* Extra measures for special categories of data:
* A register recording who can access or has accessed the systems and programs, and which actions they have performed
* A register recording the movements of information containing special categories of data inside/outside the company or project
* Special technical measures to protect special categories of data, such as encryption or double opt-in access to special information
## Some considerations about encryption
* **Mandatory encryption:**
* When imposed by law, as in the processing or collection of data considered a special category
* When a Code of Conduct or a sector-specific code requires it
* When a PIA or EIPD recommends it
* To minimize or resolve risks
* **Convenient encryption:**
* In case of security breaches
* **Voluntary encryption:**
* Dissociated information, or when Art. 11 GDPR applies
**Particular cases when encryption is required:**
* When warranted by the risk involved in processing this type of information, or by the impact a breach of this information would have
* To avoid the need to report a security breach
**Encryption is mandatory in the following sectors:**
* Health
* Prevention of money laundering
* Telecommunications
* National Defense or National Security
* Legal issues
* Journalism and press (to prevent the leaking of information sources)
* To protect the Intellectual Property and know-how of companies
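As a minimal illustration of encrypting special-category data at rest, the sketch below uses Fernet symmetric encryption from the widely used Python `cryptography` package. The choice of library and the key handling shown are assumptions for illustration, not a tool prescribed by the project; in practice, keys must be stored separately from the data.

```python
from cryptography.fernet import Fernet

# The key must be stored separately from the encrypted data,
# e.g. in a key management service, never alongside the ciphertext.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a record containing special-category data before storing it.
ciphertext = fernet.encrypt(b"health status: example record")

# Only holders of the key can recover the plaintext.
assert fernet.decrypt(ciphertext) == b"health status: example record"
```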
What technical and physical measures do we need to consider?
A lot of security incidents are due to the theft or loss of equipment, the
abandonment of old computers, or hard-copy records being lost, stolen or
incorrectly disposed of. Security measures therefore include both physical
protection and technical protection for computer and IT security.
When considering **physical security,** you should look at factors such as:
* The quality of doors and locks, and the protection of your premises by such means as alarms, security lighting or CCTV.
* How you control access to your premises and how visitors are supervised.
* How you dispose of any paper waste and electronic waste.
* How you keep IT equipment, particularly mobile devices, secure.
_https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/security/_
What technical and physical measures do we need to consider?
When considering cybersecurity, you should look at factors such as:
* System security – the security of your network and information systems, including those which process personal data.
* Data security – the security of the data you hold within your systems, eg ensuring appropriate access controls are in place and that data is held securely.
* Online security – the security of your website and any other online service or application that you use.
* Device security – including policies on Bring-your-own-Device (BYOD) if you offer this.
**_More info:_ **
_https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/security/_
## FAQs
* Is it obligatory to implement data protection measures whenever we collect data?
* No; such measures are mandatory only if we collect personal data, and they will depend on the risks and on the categories of data (special or not)
* However, it may be necessary to take measures to protect information other than personal data, for example when the information collected is confidential
* Do we have to sign a Data Processor agreement with every provider we have?
* No; only if the provider has access to or processes personal data. If a provider can only access the building or premises of the company and does not work with data, it is only recommended to sign a letter of commitment to confidentiality
* Is it possible to collect data without informing the affected user about the processing?
* No; the user must always be informed, and express consent is needed unless the law establishes other requirements or possibilities
* Non-express consent is not permitted
* Are all measures and precautions explained in this document?
* No; only general aspects are described (please review with the DPO or Ethics Mentor the impact your situation may have on the project)
**Note 1: Data Processors:**
Data Processors are providers whose service involves the processing of
Personal Data. These providers shall follow concrete security measures, as if
they were the Data Controller, because they are working with data that belongs
to the Controller.
It is mandatory to sign an agreement between the Data Processor and the Data
Controller that regulates the provision of the services in this sense. This
agreement must specify the type of access to data by the provider, the type of
data the provider can access, and the security measures implemented to comply
with the corresponding obligations. **Type of obligations of the Data
Processor:**
* To follow the instructions of the Controller on the processing of data when providing the service
* To implement the same security measures as the Controller when the provision of services means working with the same information as the Controller, and to check which categories of data are processed (sensitive or special-category data according to the GDPR)
**Note 2: Sharing information**
When two or more parties are sharing personal information, it is important to
consider:
1. The legitimation for obtaining or collecting this information. Is the user or customer informed about the personal data processing and about who is collecting her/his data?
2. If more than one company collects this information, have the parties signed an agreement between them?
3. Do they share the information because both are working on the same project and could be joint controllers of the processing, or do they share the information with foreign or third parties?
4. Have they taken the corresponding security measures?
5. Does sharing this information involve using platforms that imply processing or management of data outside the European Union? In this case it is mandatory to analyse the situation in particular.
## Summary of Terms
* **Anonymization:** A type of _information sanitization_ whose intent is _privacy protection_. It is the process of either _encrypting_ or removing _personally identifiable information_ from _data sets_, so that the people whom the data describe remain _anonymous_. The _European Union_'s General Data Protection Regulation demands that stored data on people in the EU undergo either an anonymization or a _pseudonymization_ process (source: Wikipedia).
* **Pseudonymization:** A _data management_ and _de-identification_ procedure by which _personally identifiable information_ fields within a _data_ record are replaced by one or more artificial identifiers, or _pseudonyms_. A single pseudonym for each replaced field or collection of replaced fields makes the data record less identifiable while remaining suitable for _data analysis_ and _data processing_. Pseudonymization can be one way to comply with the _European Union_'s General Data Protection Regulation demands for secure storage of personal information. Pseudonymized data can be restored to its original state with the addition of information which then allows individuals to be re-identified, while anonymized data can never be restored to its original state (source: Wikipedia). A minimal sketch of this procedure is given after this list of terms.
* **Cybersecurity:** This is a complex technical area that is constantly evolving, with new threats and vulnerabilities always emerging. It may therefore be sensible to assume that your systems are vulnerable and take steps to protect them (source: ICO.UK).
* **Encryption:** Is the process of encoding a message or information in such a way that only authorized parties can access it and those who are not authorized cannot. Encryption does not itself prevent interference, but denies the intelligible content to a would-be interceptor. In an encryption scheme, the intended information or message, referred to as _plaintext_ , is encrypted using an encryption algorithm – a _ cipher _ – generating _ ciphertext _ that can be read only if decrypted. For technical reasons, an encryption scheme usually uses a _ pseudo-random _ encryption key generated by an algorithm. It is in principle possible to decrypt the message without possessing the key, but, for a well-designed encryption scheme, considerable computational resources and skills are required. An authorized recipient can easily decrypt the message with the _ key _ provided by the originator to recipients but not to unauthorized users (source: Wikipedia).
* **PIA:** Privacy Impact Assessment. Is an analysis which assists organizations in identifying and managing the privacy risks arising from new projects, initiatives, systems, processes, strategies, policies, business relationships. This normally must be done when starting a new business that implies treatment of personal data; or when a company opens a new branch or service that implies a new treatment of data.
* **EIPD or DPIA:** A privacy impact assessment that helps companies identify and minimize the data protection risks of a project at any moment it is needed.
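Following the definition of pseudonymization above, here is a minimal sketch of the procedure, assuming hypothetical record fields: it replaces an identifying field with a random pseudonym while keeping the mapping separate, so that re-identification remains possible for authorized staff, unlike anonymization.

```python
import secrets

# The mapping is kept separately and protected; holding it allows
# authorized staff to re-identify individuals, unlike anonymization.
pseudonym_map: dict[str, str] = {}

def pseudonymize(record: dict) -> dict:
    """Replace the identifying 'name' field with a stable random pseudonym."""
    name = record["name"]
    if name not in pseudonym_map:
        pseudonym_map[name] = "P-" + secrets.token_hex(4)
    out = dict(record)
    out["name"] = pseudonym_map[name]
    return out

# Hypothetical survey record, for illustration only.
print(pseudonymize({"name": "Jane Doe", "answer": "satisfied"}))
# e.g. {'name': 'P-3f9a1c2b', 'answer': 'satisfied'}
```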
* More information about best practices in data protection and confidentiality:
* About DPIAs/PIAs:
_https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/accountability-and-governance/data-protection-impact-assessments/_
* About reports on GDPR/data protection:
_https://edpb.europa.eu/about-edpb/board/annual-reports_en_
* About erasure:
* _https://ec.europa.eu/info/law/law-topic/data-protection/reform/rules-business-and-organisations/dealing-citizens/do-we-always-have-delete-personal-data-if-person-asks_en_
* _https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/individual-rights/right-to-erasure/_
* About deleting information in accordance with the GDPR:
* _https://ico.org.uk/media/for-organisations/documents/1475/deleting_personal_data.pdf_
**Executive Summary**
This document describes the data the project will generate and how it will be
produced and analyzed. It also aims to detail how the data related to the
PortForward project will be disseminated and afterwards shared and preserved.
# 1 Introduction
This document introduces the second version of the project Data Management
Plan (DMP).
PortForward participates in the Open Research Data Pilot (ORD pilot), through
which the European Commission aims to improve and maximize access and reuse of
research data generated by Horizon 2020 projects. The ORD pilot considers the
need to balance openness and protection of scientific information,
commercialization and Intellectual Property Rights (IPR), privacy concerns,
security as well as data management and preservation questions.
The DMP describes the data management life cycle for the data to be collected,
processed and/or generated by the project. The PortForward DMP provides an
analysis of the main elements of the data management policy that will be used
by the consortium regarding all the datasets that will be generated by the
project. It ensures that the research data will be findable, accessible,
interoperable and re-usable (FAIR).
It also lists the different datasets that will be used, collected and
generated by the project, the main exploitation perspectives of these
datasets, and the major management principles the project will implement to
handle these.
The DMP is not fixed but rather a living document that will evolve through the
lifespan of the project. This version of the DMP includes an overview of the
datasets to be produced by the project and the specific conditions that are
attached to them. The DMP will cover the complete data life cycle.
# 2 Data summary
All PortForward partners have identified the datasets that will be produced
during the different phases of the project. The following table provides a
summary of these datasets, including the associated WP and the name of the
partner responsible for each one.
**Table 1 Overview of PortForward datasets**
<table>
<tr>
<th>
**Dataset No.**
</th>
<th>
**Dataset Name**
</th>
<th>
**Lead**
</th>
<th>
**Associated**
**WP**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
User expectations and goals
</td>
<td>
ACCIONA
</td>
<td>
WP1
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
Use case restrictions
</td>
<td>
Vigo
</td>
<td>
WP1
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
Technical specifications
</td>
<td>
IFF/ LEITAT
</td>
<td>
WP1
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
KPI catalogue
</td>
<td>
MARTE
</td>
<td>
WP1
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
System architecture incl. existing port systems
</td>
<td>
LEITAT
</td>
<td>
WP2
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
People and assets tracking data
</td>
<td>
LEITAT
</td>
<td>
WP4
</td> </tr>
<tr>
<td>
**7**
</td>
<td>
AR Remote assistance data
</td>
<td>
UBIMAX
</td>
<td>
WP4
</td> </tr>
<tr>
<td>
**8**
</td>
<td>
AR pilot assistance data
</td>
<td>
UBIMAX
</td>
<td>
WP4
</td> </tr>
<tr>
<td>
**9**
</td>
<td>
Stowage optimization data
</td>
<td>
ACCIONA
</td>
<td>
WP4
</td> </tr>
<tr>
<td>
**10**
</td>
<td>
Mafis working hours optimization
</td>
<td>
ACCIONA
</td>
<td>
WP4
</td> </tr>
<tr>
<td>
**11**
</td>
<td>
Sustainability assessment data
</td>
<td>
LEITAT
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
**12**
</td>
<td>
Yard operations data
</td>
<td>
BRUNEL
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
**13**
</td>
<td>
Digital twin (3D model, process and sensor data)
</td>
<td>
IFF
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
**14**
</td>
<td>
Decision support system
</td>
<td>
IFF
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
**15**
</td>
<td>
Stakeholder survey data
</td>
<td>
MARTE
</td>
<td>
WP7
</td> </tr>
<tr>
<td>
**16**
</td>
<td>
Use case implementation and evaluation data
</td>
<td>
MARTE/ KRISTIANSAND
</td>
<td>
WP7
</td> </tr>
<tr>
<td>
**17**
</td>
<td>
Standardization
</td>
<td>
IFF
</td>
<td>
WP8
</td> </tr>
<tr>
<td>
**18**
</td>
<td>
Technology scouting and technology watch data
</td>
<td>
CORE
</td>
<td>
WP8, WP9
</td> </tr>
<tr>
<td>
**19**
</td>
<td>
Market, competition and customer data
</td>
<td>
MARTE
</td>
<td>
WP8
</td> </tr>
<tr>
<td>
**20**
</td>
<td>
Business model canvas elements
</td>
<td>
CORE
</td>
<td>
WP8
</td> </tr>
<tr>
<td>
**21**
</td>
<td>
IPR catalogue
</td>
<td>
CORE
</td>
<td>
WP9
</td> </tr>
<tr>
<td>
**22**
</td>
<td>
IoT device/network communication and localization data
</td>
<td>
IMEC
</td>
<td>
WP3, WP4
</td> </tr> </table>
In order to collect all relevant information for each dataset, the following
table was provided to the consortium partners. One table was generated for
each dataset. The topics addressed in the table include all relevant
information, such as dataset responsible partner, use of metadata, definition
of data formats, provisions for making the data FAIR, security and ethical
aspects. All the tables with the detailed information on each dataset are
provided in the annex of this deliverable.
**Table 2 Form for collection of information on each dataset**
<table>
<tr>
<th>
DATASET NAME
</th> </tr>
<tr>
<td>
Data Identification
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
</td> </tr>
<tr>
<td>
Source
</td>
<td>
</td> </tr>
<tr>
<td>
Partners activities and responsibilities
</td> </tr>
<tr>
<td>
Partner owner of the data; copyright holder (if applicable)
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
DATASET NAME
</th> </tr>
<tr>
<td>
Partner in charge of data collection
</td>
<td>
</td> </tr>
<tr>
<td>
Partner in charge of data analysis
</td>
<td>
</td> </tr>
<tr>
<td>
Partner in charge of data storage
</td>
<td>
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
</td> </tr>
<tr>
<td>
Standards
</td> </tr>
<tr>
<td>
Info about metadata (production and storage dates, places) and documentation?
</td>
<td>
</td> </tr>
<tr>
<td>
Standards, format, estimated volume of data
</td>
<td>
</td> </tr>
<tr>
<td>
Data exploitation and sharing
</td> </tr>
<tr>
<td>
Data exploitation (purpose/use of the data analysis)
</td>
<td>
</td> </tr>
<tr>
<td>
Data access policy / Dissemination level: confidential (only for members of
the Consortium and the Commission Services) or Public
</td>
<td>
</td> </tr>
<tr>
<td>
Data sharing, re-use, distribution, publication (How?)
</td>
<td>
</td> </tr>
<tr>
<td>
Embargo periods (if any)
</td>
<td>
</td> </tr>
<tr>
<td>
Legal framework(s) regulating or affecting in any way this dataset. How is
this addressed?
</td>
<td>
</td> </tr>
<tr>
<td>
Personal data protection: are they personal data? If so, have you gained
(written) consent from data subjects to collect this information?
</td>
<td>
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td> </tr>
<tr>
<td>
Data storage (including backup): Where? For how long?
</td>
<td>
</td> </tr> </table>
# 3 FAIR data
PortForward aims at generating FAIR data, i.e. data that is findable,
accessible, interoperable and reusable.
### 3.1 Making data findable, including provisions for metadata
In order to make data findable, metadata will be used. All partners have
agreed to provide relevant metadata and keywords, so that their data will be
easily discoverable. Clear version numbers will be included (an automated
process through the project repository) and standard naming conventions will be
defined.
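As an illustration, the sketch below shows the kind of metadata record and versioned file name such conventions could produce for one of the datasets listed above (dataset 15, stakeholder survey data). The field names, the `<WP>_<title>_v<version>` pattern and the example values are placeholders, not a convention the consortium has agreed on.

```python
# Illustrative sketch only: fields, values and the naming pattern are
# placeholders, pending the consortium's agreed conventions.

metadata = {
    "title": "Stakeholder survey data",   # dataset 15 from the table above
    "keywords": ["port", "stakeholder", "survey"],
    "responsible_partner": "MARTE",
    "work_package": "WP7",
    "version": "1.2",                     # assigned by the project repository
}

def dataset_filename(meta, extension="csv"):
    """Build a file name following a simple <WP>_<title>_v<version> pattern."""
    title = meta["title"].lower().replace(" ", "-")
    return f"{meta['work_package']}_{title}_v{meta['version']}.{extension}"

print(dataset_filename(metadata))  # -> WP7_stakeholder-survey-data_v1.2.csv
```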
### 3.2 Making data openly accessible
The consortium partners have already identified which data will be made openly
available and which cannot be shared (or needs to be shared under restrictions),
including the reason why access is restricted in the latter case.
In the next DMP versions, more details will be provided regarding the
accessibility of the data. Details on the repository, the methods and tools
necessary to access the data will be included in future versions of the DMP.
The consortium aims to use the project website (or other easily accessible
repositories) as the repository for openly accessible (public) data, ensuring
easy access for anyone interested.
### 3.3 Making data interoperable
Provisions are also being taken to make data interoperable, making it easier to
exchange and re-use it across research institutions, organisations, etc.
This could be more difficult for project-specific technical datasets, but the
project strives to make all open data interoperable. Interoperability for
technical data is also addressed in WP2, T2.3 “Interoperability & data
modelling”. The goal of this task is to define a generic model of data
curation.
### 3.4 Increase data re-use (through clarifying licences)
Information regarding the reusability of the data will be provided for each
dataset separately. The consortium partners have provided relevant information
on embargo periods and the intended period for which their data will remain
reusable.
# 4 Allocation of resources
Costs for making the project data FAIR are eligible as part of the Horizon
2020 grant. At the current stage, no such costs are foreseen. Future versions
of the DMP will update this chapter to include costs for the long-term
preservation of data (after the project's end).
# 5 Data security
During the implementation of the PortForward project, the consortium members
will collect data in various forms, e.g. pen and paper, photos, videos,
electronic documents. For the purpose of the project documentation, each
partner will store this data individually. For this, the respective
organizational rules and regulations of each partner with respect to data
storage and security apply.
For the cooperation between partners in the consortium, the coordinator
Fraunhofer Gesellschaft established a dedicated document management platform,
based on OpenText Content Server. Data relevant to multiple partners will be
stored and retained on this infrastructure. The content server infrastructure
underlies the data protection responsibilities of the Fraunhofer Gesellschaft.
Storing of personal data will only occur with explicit prior informed consent
of subjects, based on the informed consent procedures as laid out in D11.1.
# 6 Ethical aspects
Regarding the ethical and legal issues impacting data sharing, certain
provisions have already been implemented in the project ethics work package
(WP11). D11.1 provides information on the stakeholder involvement, selection
and recruitment as well as the informed consent procedure. In D11.2,
information on the compliance of the consortium towards the collection of
personal data and their handling over the life cycle of the project is
provided.
# 7 Next steps
The PortForward partners will update the DMP on a yearly basis. The next
versions of the DMP will be delivered in December 2020 (D9.7) and December
2021 (D9.8). All deliverables are considered public and will be uploaded to
the project’s website after being officially approved.
## 1 Introduction
This Data Management Plan (DMP) is the second deliverable in the first Work
Package (WP) of the Council of Coaches project. It will describe how the data
will be handled during and after the project. The focus during the project
will be on providing privacy for the personalised and medical data that will
be used. After the project has ended the focus will shift towards making the
data as openly accessible as possible. By means of anonymising the
personalised data, the consortium will try to open up as much data as
possible. However, since privacy of the patients comes first, we foresee this
will not be possible for all datasets.
In May 2018, the new European Regulation on Privacy, the General Data
Protection Regulation (GDPR), will come into effect. In this DMP we will
describe the measures to protect the privacy of all subjects in the light of
the GDPR. We will not describe any measures on a national level before May
2018, since the little personal data we hold before May 2018 will be stored in
the Netherlands at a location that is already GDPR compliant.
This DMP is a living document. At this stage in the research a lot of
questions concerning the data are still open for discussion. Questions
concerning opening up the data, or questions related to the Findable,
Accessible, Interoperable, Reusable (FAIR) principles, will only have a
provisional answer in this DMP.
soon as it is available. An update will be provided before the end of the
first period, in time for the first review. Another update will be provided
before the end of the project in order to describe how the data will be made
open access.
In this document, first the objectives of the document will be described.
Secondly, the relationship between the GDPR and the implementation of data
management within the Council of Coaches project will be discussed. In
chapter 4, guidelines will be provided for the partners within the consortium
on working with datasets that contain personal information. In the last
chapters, each WP will describe the datasets that will be collected or
generated within that WP. Furthermore, the project management will describe
our intentions for the datasets in following the FAIR principles and opening
up the datasets.
## 2 Objectives
In this deliverable the data management of the Council of Coaches Project will
be described.
Discussions on Privacy by Design and Open data will be ongoing during the
project.
Our aim is for the DMP to not only be a description of the datasets within the
Council of Coaches project, but for the DMP to serve two additional goals:
On the one hand, it will provide guidelines for data management to all
partners in the project.
On the other hand, it can serve as a tool to create awareness on the different
topics of Open Access, Privacy and Personal Data, and the FAIR principles.
This will help the participants to make as much of the research data Openly
Accessible as possible, while staying within the boundaries of the Privacy
regulations.
## 3 Data Management and the GDPR
As of May 2018 the GDPR will come into play. This means all partners within
the consortium will have to follow the same new rules and principles. On the
one hand it makes it easier for the project management to set up guidelines
for the correct use of personal data. On the other hand it means that in some
cases, tools and partner specific guidelines are not yet available.
In this chapter we will describe how the founding principles of the GDPR will
be followed in the Council of Coaches project. In the next chapter we will set
out specific guidelines for proper use of personal data within the boundaries
of the GDPR.
### Lawfulness, fairness and transparency
_Personal data shall be processed lawfully, fairly and in a transparent manner
in relation to the data subject._
The Council of Coaches project describes all handling of personal data in this
DMP. Some of the requested answers cannot be provided at the moment of
writing. Therefore we chose to let the DMP be a living document. As soon as
information about datasets becomes available, it will be updated in the DMP.
Furthermore, additional updates will be provided to the Project Officer before
both reviews.
All data gathering from individuals will require informed consent of the test
subjects, patients, or other individuals who are engaged in the project.
Informed consent requests will consist of an information letter and a consent
form. These will state the specific purpose of the experiment (or other
activity), and how the data will be handled, safely stored, and shared. The
request will also inform individuals of their rights to have data updated or
removed, and the project’s policies on how these rights are managed (see
below).
We will try to anonymise the personal data as far as possible; however, we
foresee this won’t be possible in all instances. Therefore, further consent
will be requested to use the data for open research purposes; this includes
presentations at conferences and publications in journals, as well as
depositing a data set in an open repository at the end of the project.
The consortium tries to be as transparent as possible in its collection of
personal data. This means that, when collecting data, the information leaflet
and consent form will describe the kind of information, the manner in which it
will be collected and processed, if, how, and for which purpose it will be
disseminated, and if and how it will be made open access. Furthermore, the
subjects will have the possibility to request what kind of information has
been stored about them, and they can request, within reasonable limits, to be
removed from the results.
### Purpose limitation
_Personal data shall be collected for specified, explicit and legitimate
purposes and not further processed in a manner that is incompatible with those
purposes_
The Council of Coaches project won’t collect any data that is outside the
scope of the project. Each researcher will only collect data necessary within
their specific work package.
### Data minimisation
_Personal data shall be adequate, relevant and limited to what is necessary in
relation to the purposes for which they are processed_
Only data that is relevant for the project’s research questions and the
required coaching strategies will be collected. However since patients are
free in their answers, both when working with the Council of Coaches
technology or in answering open ended research questions, this could result in
them sharing personal information that has not been asked for by the project.
This is normal in any coaching relationship and we therefore chose not to
limit the patients in their answer possibilities. Since this data can be
highly personal, it will be treated according to all guidelines on personal
data and won’t be shared without anonymization or explicit consent of the
patient.
### Accuracy
_Personal data shall be accurate and, where necessary, kept up to date._
All data collected will be checked for consistency. However, since some of the
datasets register self-reported data from the patients, we cannot check this
data for accuracy. Since all data is gathered within a specific timeframe, we
chose not to keep the data up to date, since doing so would hinder our
research. However, we will try to capture the data as accurately as possible;
for example, “age” could be stored as “age in 2018”. This removes the
necessity of keeping this information up to date.
### Storage limitation
_Personal data shall be kept in a form which permits identification of data
subjects for no longer than is necessary for the purposes for which the
personal data are processed_
All personal data that will no longer be used for research purposes will be
deleted as soon as possible. All personal data will be made anonymous as soon
as possible. At the end of the project, if the data has been anonymised, the
data set will be stored in an open repository. If data cannot be made
anonymous, it will be pseudonymised as much as possible and stored for at most
the period prescribed by the partner’s institutional archiving rules. A complete
data set will be stored at the UT for project archiving for 10 years,
according to UT’s data policy.
### Integrity and confidentiality
_Personal data shall be processed in a manner that ensures appropriate
security of the personal data, including protection against unauthorised or
unlawful processing and against accidental loss, destruction or damage, using
appropriate technical or organisational measures_
All personal data will be handled with appropriate security measures applied.
This means:
* Data sets with personal data will be stored at a SharePoint server at the UT that complies with all GDPR regulations and is ISO 27001 certified.
* Access to this SharePoint will be managed by the project management and will be given only to people who need to access the data. Access can be retracted if necessary.
* Data sets with personal information could further be shared through the Council of Coaches Dropbox folder, only if the datasets are sufficiently encrypted. The key to the encryption will be handed out by the project management and will be changed when access needs to be revoked.
* All people with access to the personal data files will need to sign a confidentiality agreement.
* These data files cannot be copied, unless stored encrypted on a password protected storage device. In case of theft or loss, these files will be protected by the encryption.
* These copies must be deleted as soon as possible and cannot be shared with anyone outside the consortium or within the consortium without the proper authorization.
In exceptional cases where the dataset is too large, or it cannot be
transferred securely, each partner can share their own datasets through
channels that comply with the GDPR.
### Accountability
_The controller shall be responsible for, and be able to demonstrate
compliance with the GDPR._
At project level, the project management is responsible for the correct data
management within the project. In the next chapter guidelines will be
described for each partner to follow in case of datasets with personal data.
Whether the partners follow these guidelines will be regularly checked by the
project management. For each data set, a responsible person has been appointed
at partner level, who will be held accountable for this specific data set.
Each researcher will need to report any dataset containing personal
information to their Data Protection Officer, in line with the GDPR.
## 4 Guidelines for data management on project level
Data management is an ongoing process. The first version of the DMP will be
published at month 6, when there are still many uncertainties about the data
collected in the project. For many tasks, due to the nature of the research,
we do not know what kind of data we will collect specifically. As a result, we
do not yet have a good overview of whether this data will contain personal
data (or data that can be combined into personal data) or confidential parts
of, for example, software. Therefore we have established guidelines for data management
to ensure that all researchers will keep up the principles of lawful and
ethical data management and end users will be able to trust the system with
their personal data.
The guidelines established in this DMP are embraced within the consortium and
the project management will ensure these principles will be followed.
### Guidelines
The sections below describe the ten basic guidelines that will be adopted in
the Data Management process in Council of Coaches.
#### 4.1.1 Purpose limitation and data minimisation
As soon as a researcher has identified which information to collect, the
principles of purpose limitation and data minimisation come into play. Each
researcher will take care not to collect any data that is outside the scope of
his or her research and will not collect additional information not directly
related to the goal of his research.
#### 4.1.2 Personal information
As soon as the parameters in the data set are identified, the researchers need
to indicate whether the data set will contain personal information.
In cases where the parameters themselves contain no personal information, but
the various parameters can be merged to show a distinct pattern that can be
linked to a specific person, the data set will be classified as containing
personal information as well.
When the dataset contains personal information or otherwise information that
needs to be kept confidential, the following privacy principles should be
taken into account:
Sensitive data should be stored either on the dedicated SharePoint server at
the UT or encrypted on Dropbox. The SharePoint is preferred, since access
management is better implemented there. However, for short periods with a
limited number of users, the Dropbox option is a viable one.
In the case of personal data collected in physical form (e.g. on paper), it
shall be stored in a restricted-access area (e.g. a locked drawer) to which
only relevant staff have access. Once the data has been digitised, the
physical copies shall be removed.
Personal data should be deleted as soon as possible.
#### 4.1.3 Anonymization and pseudonymisation
The researcher will make sure the personal data is anonymised as quickly as
possible. When the data cannot be anonymised completely, it will be
pseudonymised as much as possible. The key between the pseudonymised file and
the list of participants will be stored by the project management in a
separate physical location from the original files. Do keep in mind that the
research subjects should be able to withdraw their data within 48 hours of the
experiment.
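A minimal sketch of this step is shown below, assuming a simple tabular participant file; the `pseudonymise` helper and the field names are illustrative, not the project's actual tooling. The key file it writes is what the project management would keep in a separate physical location; deleting it turns the pseudonymised data into anonymous data.

```python
import csv
import secrets

def pseudonymise(rows, id_field="participant_id"):
    """Replace the ID field with a random pseudonym; return rows and key table."""
    key_table = {}
    for row in rows:
        pseudonym = "P-" + secrets.token_hex(4)
        key_table[pseudonym] = row[id_field]
        row[id_field] = pseudonym
    return rows, key_table

# Hypothetical participant record; "age in 2018" follows the accuracy guideline.
rows = [{"participant_id": "jane.doe@example.org", "age_in_2018": "54"}]
rows, key = pseudonymise(rows)

# The key file is stored apart from the data, under project management control.
with open("pseudonym_key.csv", "w", newline="") as fh:
    csv.writer(fh).writerows(key.items())
```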
Part of the technology to be developed is a client-server system. Both the
client and the server should incorporate the privacy rules as set out in the
GDPR as of May 2018. At the moment we are looking into the different
possibilities of hosting a server at either RRD, UT, or UPV to be sure we will
abide by the rules. As for the client-side technology, we are looking into the
possibilities of anonymising the client side, so that the phone on which the
app runs won’t serve as a unique identifier for the project.
However, the implications of the privacy-by-design provisions in the GDPR
cannot be settled up front. As part of the Responsible Research and Innovation
work in WP2, we will therefore incorporate privacy-by-design techniques in all
relevant development phases, ensuring that privacy will be maintained at all
levels for our patients.
#### 4.1.4 Informed consent
When collecting personal information, researchers are required to get informed
consent from the patients. The standardized EU informed consent form template
is used (see Annex 1). However, this can always be supplemented with additional
consent requests. For the online collection of data for the final prototype,
we are looking into a privacy by design solution where the consent is an
integral part of the technical development of the system.
Dissemination of this (personal) information for scientific conferences,
journal papers or other dissemination items may only occur with direct consent
of the patients or if the data is completely anonymised.
#### 4.1.5 End users’ access to data
Users can submit a request, through the contact person on the consent form, to
see which information about them is being kept in our files. They can request
deletion of their information up to 48 hours after the experiment has taken
place. Furthermore, they can request that no additional data collection will
take place, effective immediately from the time of the request.
#### 4.1.6 Storage and researchers’ access to data
Personal data will need to be stored safely and in a secure environment. This
can either be the SharePoint server hosted by the UT, or a partner’s own
solution that complies with the GDPR. For short term transfer of data, an
encrypted file might be put on Dropbox. The researcher is responsible for the
correct encryption and the access management of the encryption key.
Access to this secure environment can be granted or revoked by either the
researchers responsible for the data, or the project management on a case to
case basis and will not be given out by default to all researchers. All users
that are granted access to the SharePoint will need to sign a confidentiality
agreement about the data on the server. Access can be restricted or revoked,
when researchers are not complying with the guidelines or when their contract
is terminated.
Backups of Dropbox and the UT’s SharePoint are made every 24 hours by the
system itself.
The UT’s SharePoint server is a secure environment that is ISO 27001
certified.
#### 4.1.7 Encryption
If you want to share personal data files through Dropbox, the data files will
need to be encrypted. Each researcher is free to use their own preferred
encryption tools, to keep the process as accessible as possible. Possibilities
for encryption include the built-in Word and Excel encryption or PGP keys.
If you keep data files with personal data on your personal computer or on a
separate hard drive for data analysis purposes, you can use BitLocker or
FileVault to encrypt your hard drive.
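As one possible option (a sketch, not a project mandate), the snippet below encrypts a file with the Python `cryptography` package before it is placed in the shared Dropbox folder; the file names are placeholders.

```python
from cryptography.fernet import Fernet

# Generate a key once and hand it out through a separate channel (never via
# Dropbox itself); rotating the key revokes access, as the guidelines require.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("dataset.csv", "rb") as fh:
    ciphertext = fernet.encrypt(fh.read())

with open("dataset.csv.enc", "wb") as fh:
    fh.write(ciphertext)

# A researcher holding the same key can recover the original file.
plaintext = Fernet(key).decrypt(ciphertext)
```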
#### 4.1.8 Open data and FAIR principles
Within the Council of Coaches project, we endorse the EC’s motto: to make the
data as open as possible, but as closed as necessary. We are committed to
protect the privacy of the people involved, and the confidentiality of
specific results or agreements. In these cases the data will not be made
available for public use.
In all other cases we will try our best to make the research data as broadly
available as possible. This means the FAIR principles will be upheld, but at
the moment it is not possible for us to give definitive answers on how they
will be implemented. We intend to discuss these in more detail once more
information on the data sets comes to light.
#### 4.1.9 Privacy statements
Actively communicate the privacy and security measures you take through all
media channels (from consent forms to websites) with a privacy statement. You
can adjust the statement to fit the target group, purpose, and level of
privacy.
#### 4.1.10 Update DMP
The DMP is a living document. The fact that at the moment there are still many
uncertainties about the data does not release us of the obligation to
ethically and lawfully collect, process, and store this data. All researchers
have the responsibility to keep the DMP up to date, so the DMP will reflect
the latest developments in data collection.
## 5 Data management per WP
The work package leaders have been asked to describe the different data sets
that will be used within their WP as well as possible. For the description of
the work packages the standard EC template for a data management plan has been
used. However, many questions concerning the FAIR principles cannot be
answered at this moment. Therefore we have specified provisional guidelines
concerning these principles below. If not otherwise specified in the Work
Package description, these provisional guidelines will for now apply to the
data set. Descriptions in the Work Packages that deviate from these intentions
will be mentioned in the description of the work packages.
### Provisional Guidelines
#### 5.1.1 Overall
Since the consortium has no extensive detailed knowledge of Open Data and the
best practices of opening up the data set, we intend to collaborate with the
information specialists of the University of Twente. This department offers
specialised support for all aspects of data management including Open Access,
Open Data, Archiving and Data Management.
For secure storage, we work closely with the ICT department to keep up with
the latest technologies and the rules and regulations regarding the GDPR.
If possible from a privacy point of view, it is our intention to make all the
above-mentioned written data openly available except those parts of the data
that pertain to practices and technologies covered by any secrecy clauses in
the consortium agreement or in the exploitation agreements reached within the
consortium or between the consortium and external parties.
#### 5.1.2 Findable
Each dataset will get a unique Digital Object Identifier (DOI).
Deliverable 1.1 on Quality, Risk, and IPR management has dictated the naming
conventions and versioning guidelines that will be used within the project.
When a data set is stored in a trusted repository, the name might be adapted
in order to make it more findable.
Keywords will be added in line with the content of the datasets and with
terminology used in the specific scientific fields to make the datasets
findable for different researchers.
#### 5.1.3 Accessible
As described before, our intention is to open up as many data sets as
possible. However if we cannot guarantee the privacy of the participants, the
data set might be opened up under a very restricted license or it will remain
completely closed.
All open data set will be stored in a trusted repository. At the moment we are
looking into DANS and 4TU Centre for research data.
DANS is a Data Seal of Approval (DSA) and World Data System (WDS) trusted
repository, based in the Netherlands. 1 In DANS access can be restricted,
ranging from Open Access, Open Access for registered users to restricted
access. For the final dataset the appropriate level of access will be chosen.
4TU Centre for Research Data is a DSA Trusted repository, also based in the
Netherlands 2 . The centre provides knowledge, experience and the tools to
archive research data in a standardized, secure and well-documented manner.
The data will be accessible here for the (restricted) public.
Furthermore datasets might be stored in partners’ repositories, and other
(inter)national trusted repositories with a Data Seal of Approval. The
definitive list will be added to the final version of the DMP.
If a dataset will be stored in a trusted repository with a limited access
license, a Data Access Committee will be set up at the end of the project.
They will decide on a case-to-case basis if access will be granted and for how
long.
#### 5.1.4 Interoperable
We are looking into suitable metadata standards, for example: DataCite 3 and
Dublin Core 4 .
Depending on the scientific field the data set originates from, additional
metadata standards might be used.
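To illustrate, a Dublin Core style record for one of the project's data sets could look like the sketch below; the element values, the rights statement and the identifier are placeholders to be filled in when a data set is actually deposited.

```python
import json

# Hypothetical Dublin Core record; values are placeholders, not real deposits.
dublin_core_record = {
    "dc:title": "Council of Coaches WP2 stakeholder interview data",
    "dc:creator": "DBT",
    "dc:subject": ["virtual coaching", "RRI", "stakeholder interviews"],
    "dc:date": "2018",
    "dc:type": "Dataset",
    "dc:format": "application/pdf",
    "dc:rights": "Restricted access pending anonymisation",
    "dc:identifier": "doi:10.xxxx/placeholder",  # DOI assigned on deposit
}

print(json.dumps(dublin_core_record, indent=2))
```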
#### 5.1.5 Re-Use
If possible, the data set will be licensed under an Open Access license.
However, this will depend on the level of privacy and the Intellectual
Property Rights (IPR) involved in the data set.
A period of embargo will only be necessary if a data set contains specific IPR
or other exploitable results that will justify an embargo.
Our intention is to make as much data as possible re-usable for third
parties. Restrictions will only apply when privacy, IPR, or other exploitation
grounds are in play.
All data sets will be cleared of bad records, with clear naming conventions,
and with appropriate metadata conventions applied.
The length of time the data sets will be stored will depend on the content of
the data set. For example, if a data set contains medical practices that we
foresee will be replaced soon, it won’t be stored indefinitely. Furthermore,
data sets with specific technological aspects might become outdated, and we
will apply an appropriate storage period.
## 6 WP1: Data for sound, effective and efficient project management
<table>
<tr>
<th>
DMP component Issues to be addressed
1\. Data summary
</th>
<th>
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data generated/collected
* Specify if existing data is being re-used (if any)
* Specify the origin of the data
* State the expected size of the data (if known)
* Outline the data utility: to whom will it be useful
</th> </tr>
<tr>
<td>
The purpose of the data collection in this WP is sound, effective and efficient project management, to ensure the project is delivered on time, within budget and with outstanding quality.
The following datasets will be used for project management purposes. These data will be kept confidential within the project, so we have chosen the formats that were easiest for the partners to work with.
a. Agendas and minutes of all meetings held (.doc format)
b. Financial information of the partners, as provided to the EC for the two periodic reports (.xls/.doc/email format)
c. Templates of deliverables, presentations, posters etc. (.doc/.ppt format)
d. Logos and other corporate identity items (several graphical formats like vector files, jpg, png etc.)
e. All final deliverables and reports, which will be uploaded to the EC portal in pdf format
No data is being re-used.
The data will be collected/generated by the project manager before, during, or after project meetings, as well as during the periodic and financial reporting periods.
Graphical elements will be generated by iSPRINT, in charge of dissemination and exploitation activities.
The data will probably not exceed 1 Gigabyte (GB).
The data will be useful for project managers, project partners and, where it concerns the deliverables and financial information generated during periodic reports, to the EC.
</td> </tr>
<tr>
<td>
FAIR Data 2.1. Making data findable, including provisions for metadata
</td>
<td>
* Outline the discoverability of data (metadata provision)
* Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
* Outline naming conventions used
* Outline the approach towards search keyword
* Outline the approach for clear versioning
* Specify standards for metadata creation (if any). If there are no standards in your discipline describe what type of metadata will be created and how
</td> </tr>
<tr>
<td>
The DMP and periodic reporting and financial information will be shared with
the EC through the portal. This should make it accessible for the committee as
well as the consortium. No further
keywords will be provided and no further measures will be taken to improve the
discoverability of the data.
</td> </tr>
<tr>
<td>
2.2 Making data openly accessible
</td>
<td>
* Specify which data will be made openly available? If some data is kept closed provide rationale for doing so
* Specify how the data will be made available
* Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
* Specify where the data and associated metadata, documentation and code are deposited
* Specify how access will be provided in case there are any restrictions
</td> </tr>
<tr>
<td>
Most of this data will not be made public, with the exception of this Data
Management Plan, which will be published through the EC’s portal. Furthermore,
the periodic reports and financial information will be provided to the EC, but
remain confidential.
Therefore we believe it is not necessary to make this data findable, openly
accessible or otherwise future-proof.
</td> </tr>
<tr>
<td>
2.3. Making data interoperable
</td>
<td>
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
* Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
</td> </tr>
<tr>
<td>
No project management data will be made interoperable.
</td> </tr>
<tr>
<td>
2.4. Increase data re-use (through clarifying licences)
</td>
<td>
* Specify how the data will be licenced to permit the widest reuse possible
* Specify when the data will be made available for reuse. If applicable, specify why and for what period a data embargo is needed
* Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
* Describe data quality assurance processes
* Specify the length of time for which the data will remain re-usable
</td> </tr>
<tr>
<td>
No data re-use will be made possible for project management data.
</td> </tr>
<tr>
<td>
3\. Allocation of resources
</td>
<td>
* Estimate the costs for making your data FAIR.
Describe how you intend to cover these costs
* Clearly identify responsibilities for data management in your project
* Describe costs and potential value of long term preservation
</td> </tr>
<tr>
<td>
No additional costs will be incurred for the project management data.
</td> </tr>
<tr>
<td>
4\. Data security
</td>
<td>
▪ Address data recovery as well as secure storage and transfer of sensitive
data
</td> </tr>
<tr>
<td>
Most PM data will be stored in the Dropbox folder, since the dataset contains
no personal data. Dropbox access is granted by the project management and can
be revoked at any time. Dropbox makes its own backups at least every 24 hours.
Any data that is sensitive will be stored on either the SharePoint server or
encrypted on Dropbox, in line with the project’s guidelines on personal data.
At the end of the project most project management data will be archived
through JOIN, the archiving system of the UT. It will be stored for 10 years.
</td> </tr>
<tr>
<td>
5\. Ethical aspects
</td>
<td>
▪ To be covered in the context of the ethics review, ethics section of
Description of Action (DoA) and ethics deliverables. Include references and
related technical aspects if not covered by the former
</td> </tr>
<tr>
<td>
None of the project management data is subject to ethical considerations,
outside of privacy. This aspect has been covered in the data security
questions.
</td> </tr>
<tr>
<td>
6\. Other
</td>
<td>
▪ Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
</td> </tr>
<tr>
<td>
In line with the UT policy on archiving data, Project Management data will be
archived at the end of the project for a 10 year period in JOIN.
</td> </tr> </table>
## 7 WP2: Data for Responsible Research and Innovation
<table>
<tr>
<th>
DMP component Issues to be addressed
1\. Data summary
</th>
<th>
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data generated/collected
* Specify if existing data is being re-used (if any)
* Specify the origin of the data
* State the expected size of the data (if known)
* Outline the data utility: to whom will it be useful
</th> </tr>
<tr>
<td>
The purpose of the data collection in this WP is understanding opinions on societal responsibility and user needs, in order to achieve the objective of ensuring the research and innovation process in the project follows the principles of Responsible Research and Innovation, and to develop new tools and coaching methods.
The following datasets are being collected:
a. Notes and minutes of brainstorms and workshops (.doc format)
b. Recordings and notes from interviews with stakeholders (.mp3, .doc format)
c. Transcribed notes/recordings or otherwise ‘cleaned up’ or categorised data (.doc, .xls format)
The files are initially stored as word and excel files. If it is possible to anonymise them so they can be used for open access, these files will be stored in the equivalent Open Office format or as pdf.
No data is being re-used. The data will be collected/generated by DBT and RRD before, during, or after project meetings and through interviews with stakeholders. All data gathering will take place within the EU and by/from EU citizens.
The data will probably not exceed 2 GB, where the main part of the storage will be taken up by the recordings.
The data will be useful for other project partners and, in the future, for other research and innovation groups or organizations developing virtual coaching applications, and researchers in the fields of Responsible Research and Innovation (RRI), Science and Technology Studies (STS), and Technology Assessment (TA).
</td> </tr>
<tr>
<td>
FAIR Data 2.1. Making data findable, including provisions for metadata
</td>
<td>
* Outline the discoverability of data (metadata provision)
* Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
* Outline naming conventions used
* Outline the approach towards search keyword
* Outline the approach for clear versioning
* Specify standards for metadata creation (if any). If there are no standards in your discipline describe what type of metadata will be created and how
</td> </tr>
<tr>
<td>
At the moment, the following metadata will be created for the data files:
* Author
* Institutional affiliation
* Contact e-mail
* Alternative contact in the organizations
* Date of production
* Occasion of production
Further metadata might be added at the end of the project in line with
metadata conventions.
All data files will be named so as to reflect clearly their point of origin in
the Council of Coaches structure as well as their content. For instance,
brainstorming data from the RRI workshop in task 2.1 will be named “T2.1 – RRI
Workshop – Brainstorm results”.
No further deviations from the intended FAIR principles are foreseen at this
point.
</td> </tr>
<tr>
<td>
2.2 Making data openly accessible
</td>
<td>
* Specify which data will be made openly available? If some data is kept closed provide rationale for doing so
* Specify how the data will be made available
* Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
* Specify where the data and associated metadata, documentation and code are deposited
* Specify how access will be provided in case there are any restrictions
</td> </tr>
<tr>
<td>
Depending on the answers the subjects will give, the dataset might contain
personal information. The answers might be connected to the subject’s age,
gender, professional position, etc. There may be a conflict here with the need
for pseudonymisation, because with such a small group of respondents it will
be very easy to connect specific responses to specific respondents by
triangulating with e.g. social media profiles.
If it turns out the dataset does contain personal information, then it will be
treated in line with the project’s guidelines.
Open Office should be sufficient to open the document and spreadsheet files.
We foresee no restrictions to the dataset, if and when completely anonymised.
No further deviations from the intended FAIR principles are foreseen at this
point.
</td> </tr>
<tr>
<td>
2.3. Making data interoperable
</td>
<td>
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
* Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
</td> </tr>
<tr>
<td>
By storing the data in Open Office format, these data files can be read by
commercial administrative tools, like Microsoft Office as well. In case MP3
files will be recorded, these are universal and can be played through multiple
software tools.
The collected data will be ordered so as to make clear the relationship
between questions being asked and answers being given. It will also be clear
to which category the different respondents belong (consortium members,
external stakeholder).
The data will use common social science data collection practice. One
potential deviation in terms of privacy has to do with whether the answers
will contain personal information.
No further deviations from the intended FAIR principles are foreseen at this
point.
</td> </tr>
<tr>
<td>
2.4. Increase data re-use (through clarifying licences)
</td>
<td>
* Specify how the data will be licenced to permit the widest reuse possible
* Specify when the data will be made available for reuse. If applicable, specify why and for what period a data embargo is needed
* Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
* Describe data quality assurance processes
* Specify the length of time for which the data will remain re-usable
</td> </tr>
<tr>
<td>
The data will probably be stored in a trusted repository with an Open Access
license. At the moment, there is no intention of patenting the information.
By posting the data in an open repository with the Data Seal of Approval, we
will ensure that the data will be made available for re-use. Only the final
data set will be submitted to the repository.
No further deviations from the intended FAIR principles are foreseen at this
point.
</td> </tr>
<tr>
<td>
3\. Allocation of resources
</td>
<td>
* Estimate the costs for making your data FAIR.
Describe how you intend to cover these costs
* Clearly identify responsibilities for data management in your project
* Describe costs and potential value of long term preservation
</td> </tr>
<tr>
<td>
The work to be done in making the data FAIR will be covered by the ordinary
working budget for producing the deliverables. DBT’s project manager Rasmus
Øjvind Nielsen will be responsible for the data management for this purpose.
</td> </tr>
<tr>
<td>
4\. Data security
</td>
<td>
▪ Address data recovery as well as secure storage and transfer of sensitive
data
</td> </tr>
<tr>
<td>
Workshop and interview data will be gathered in the form of notes and audio
recordings.
Audio recordings and handwritten notes will be stored under lock in the
offices of the DBT in a physical storage space separate from the participant
lists of workshops and interviewees.
Audio recordings and handwritten notes (e.g. Post-its) will be destroyed once
they have been added to the machine-written notes from the workshops or
interviews. In cases where audio recordings or handwritten notes are never
added to the machine-written notes, they will be destroyed in any case no
later than the end of the Council of Coaches project.
Machine-written notes (i.e. data files in word or excel format) will be stored
in a SharePoint space provided by Twente University. Access will be granted in
line with the project’s procedures.
</td> </tr>
<tr>
<td>
5\. Ethical aspects
</td>
<td>
▪ To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if
not covered by the former
</td> </tr>
<tr>
<td>
All workshop participants and interviewees will be asked to sign a consent
form giving consent to use of the data in the Council of Coaches project’s
analyses and for the sharing of the data with others through FAIR measures.
Consent for the two different uses will be specific. Consenting to the use of
collected data for the purposes of the Council of Coaches project’s analyses
will be a mandatory requirement for workshop participation and the conduct of
interviews. In cases of non-consent to FAIR use of the data, the statements
produced by the person in question will be marked with a non-personal marker
and eliminated from the dataset before publication.
If, during data collection, it turns out that individuals could be identified
from the responses, the data will be treated as personal data and will be
stored in line with the project’s guidelines.
</td> </tr>
<tr>
<td>
6\. Other
</td>
<td>
▪ Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
</td> </tr>
<tr>
<td>
No other procedures need to be put in place for project management data.
</td> </tr> </table>
## 8 WP3: Data for providing personalised coaching
<table>
<tr>
<th>
DMP component Issues to be addressed
1\. Data summary
</th>
<th>
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data generated/collected
* Specify if existing data is being re-used (if any)
* Specify the origin of the data
* State the expected size of the data (if known)
* Outline the data utility: to whom will it be useful
</th> </tr>
<tr>
<td>
Data is collected for providing personalised coaching, in terms of coaching
strategy, manner of coaching, and timing in order to generate the content of
the conversations the individual coaches can have with the end user.
At the moment we do not have a clear picture of what kind of information will
be necessary to provide these conversations. It will be personal information
on different coaching topics (e.g. dietary, activity).
In order to determine the correct coaching strategy, datasets will be gathered
through three sources:
* Through wearable sensors (e.g. steps and heart rate monitors);
* Through surveys the user will fill out during the process;
* Through conversations the user will have with the different coaches.
At the moment, since we are still working out what kind of data will be
gathered, there has been no decision taken on the format of the data files nor
can we say anything about the size of the data.
No data is being re-used.
The data will be useful for other project partners and in the future for other
research and innovation groups or organizations developing virtual coaching
applications and researchers in the field of monitoring and coaching, as well
as medical professionals and health and life style specialists.
</td> </tr>
<tr>
<td>
FAIR Data 2.1. Making data findable, including provisions for metadata
</td>
<td>
* Outline the discoverability of data (metadata provision)
* Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
* Outline naming conventions used
* Outline the approach towards search keyword
* Outline the approach for clear versioning
* Specify standards for metadata creation (if any). If there are no standards in your discipline describe what type of metadata will be created and how
</td> </tr>
<tr>
<td>
No deviations from the intended FAIR principles are foreseen at this point.
</td> </tr>
<tr>
<td>
2.2 Making data openly accessible
</td>
<td>
* Specify which data will be made openly available? If some data is kept closed provide rationale for doing so
* Specify how the data will be made available
* Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
* Specify where the data and associated metadata, documentation and code are deposited
* Specify how access will be provided in case there are any restrictions
</td> </tr>
<tr>
<td>
It is unclear at this time whether responses will contain any personal
information (“today is my birthday”, together with a time stamp). So, at the
moment, it is unclear how much privacy protection the datasets will need. It
is our intention to make the datasets as open as possible, but if this turns
out to violate privacy regulations, we choose to keep the datasets closed or
under very restricted access.
Since we do not know what the data set will look like, we do not have any
specific methods or software in mind to access the data. It is our intention
to make the data as accessible as possible; this includes storing the data in
a broadly used file format.
No deviations from the intended FAIR principles are foreseen at this point.
</td> </tr>
<tr>
<td>
2.3. Making data interoperable
</td>
<td>
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
* Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
</td> </tr>
<tr>
<td>
We will try to make the data as interoperable as possible, depending on what
the data set will look like.
No deviations from the intended FAIR principles are foreseen at this point.
</td> </tr>
<tr>
<td>
2.4. Increase data re-use (through clarifying licences)
</td>
<td>
* Specify how the data will be licenced to permit the widest reuse possible
* Specify when the data will be made available for reuse. If applicable, specify why and for what period a data embargo is needed
* Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
* Describe data quality assurance processes
* Specify the length of time for which the data will remain re-usable
</td> </tr>
<tr>
<td>
Depending on the content of the data set and whether it contains personal
information, re-use by third parties could be possible.
No deviations from the intended FAIR principles are foreseen at this point.
</td> </tr>
<tr>
<td>
3\. Allocation of resources
</td>
<td>
* Estimate the costs for making your data FAIR.
Describe how you intend to cover these costs
* Clearly identify responsibilities for data management in your project
* Describe costs and potential value of long term preservation
</td> </tr>
<tr>
<td>
The work to be done in making the data FAIR will be covered by the regular
working budget for producing the deliverables. From RRD, Harm op den Akker
will be responsible for the data management for this purpose.
</td> </tr>
<tr>
<td>
4\. Data security
</td>
<td>
▪ Address data recovery as well as secure storage and transfer of sensitive
data
</td> </tr>
<tr>
<td>
Non-sensitive data is stored in a Dropbox folder for the entire consortium to
access. Sensitive data, in terms of personal data and privacy, is stored on a
SharePoint portal hosted at the University of Twente. Backups are made
through Dropbox and the UT ICT systems every 24 hours.
After the project, the data files will be anonymised before they are posted
in an open repository.
Collection of the questionnaires will be done through the Qualtrics system. At
the moment there is an agreement between SurfNet and Qualtrics, which would
make it sufficient for use within the Dutch university system. However, it is
unclear if the necessary security measures are indeed in place. We have looked
into different questionnaire systems, and Qualtrics, despite the limitations,
still seems to be the best (and most secure) option available.
</td> </tr>
<tr>
<td>
5\. Ethical aspects
</td>
<td>
▪ To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if
not covered by the former
</td> </tr>
<tr>
<td>
If, during data collection, we find that individuals could be identified from
the responses, the data will be treated as personal data and will be stored in
line with the project’s guidelines.
The end user will receive an information leaflet and will sign a consent form.
This way we ensure the patient is fully informed about the nature of the
research and the data collection that takes place, and that they give their
(full) consent for the research.
Furthermore, in case of an accidental finding through the nature of the
responses, we will contact the end user immediately.
</td> </tr>
<tr>
<td>
6\. Other
</td>
<td>
▪ Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
</td> </tr>
<tr>
<td>
Any research data that cannot be made publicly available will be stored
according to the archiving guidelines of RRD.
</td> </tr> </table>
## 9 WP4: Data for the development of the Holistic Behaviour Analysis
Framework
<table>
<tr>
<th>
DMP component Issues to be addressed
1\. Data summary
</th>
<th>
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data generated/collected
* Specify if existing data is being re-used (if any)
* Specify the origin of the data
* State the expected size of the data (if known)
* Outline the data utility: to whom will it be useful
</th> </tr>
<tr>
<td>
For the development of the Holistic Behaviour Analysis Framework, it is
necessary to:
1. Develop and validate new models to recognise users’ behaviours both in the interaction with the system and during their everyday affairs.
2. Extract features quantifying the user behaviour (physical, social, emotional, cognitive)
At this point it is not possible to list the specific sensor data that will be
collected; therefore, we provide examples that reflect to a large extent what
we will eventually use for measuring users' behaviours.
We plan to collect diverse digital traces from the users' explicit and
implicit interaction with their smartphones, smartwatches and ambient sensors.
Smartphone and smartwatch data logs will be stored temporarily on the devices
in a relational database (e.g., SQLite). The temporary data will be
transmitted over HTTPS in the form of data objects (e.g., JSON) to a secure
server, where it is persisted in another relational database management
system (e.g., MySQL).
The raw sensory data will be used to generate different levels of behavioural
data. Relevant features will be computed through mathematical and statistical
modelling. Examples of these features are "step counts" (calculated from the
raw acceleration data; a minimal sketch follows after this table), "minutes
spent in a given location" (calculated from the raw Global Positioning System
(GPS) data) or "heart rate variability" (calculated from the raw
photo-plethysmography data). These features will be computed automatically and
populated into the knowledge base. These features will also be used as inputs
to the machine learning models that will render a decision on the current user
behaviour (e.g., "resting" from step counts and minutes spent in a given
location).
The collected data will be used to train and validate new machine learning
models aimed at recognising the behaviour of the users. At this point it is
not yet clear whether the data will be used for each user separately or in
combination, so as to create personalised or general models (most probably
both).
The size of the data sets is difficult to estimate since the choice of sensors
and sampling rates can greatly affect the amount of generated data. The size
can range from a few KB of data per user and day if sensors with low data
generation rate are considered (e.g., GPS) to tens of MB if highly data
productive sensors are considered (e.g., accelerometers). It is safe to say
that the collected raw sensor data will be around 10 MB per user and day. The
processed data will represent a fairly compressed version of the raw sensory
data, thus in the order of KBs.
The processed data will be used to populate the knowledge base, which will in
turn become available to different components of the Council-of-Coaches
system.
Moreover, the processed data could be of much relevance for other groups
conducting research in social and behavioural computing. Likewise, the raw
sensory datasets collected in this project, after full anonymization, could be
used in the benchmarking of new machine learning and artificial intelligence
models.
</td> </tr> </table>
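As a concrete illustration of the pipeline just described, the sketch below buffers readings in an on-device SQLite database and flushes them as a JSON object over HTTPS. It is a minimal sketch under stated assumptions: the table schema, the payload layout and the ingest endpoint URL are hypothetical, and a production implementation would use the respective smartphone/smartwatch platform SDKs.

```python
import json
import sqlite3
import urllib.request

DB_PATH = "sensor_buffer.db"
ENDPOINT = "https://example.org/ingest"  # hypothetical secure server URL


def buffer_reading(sensor: str, value: float, timestamp: str) -> None:
    """Temporarily store one reading in the on-device SQLite database."""
    with sqlite3.connect(DB_PATH) as con:
        con.execute("CREATE TABLE IF NOT EXISTS readings "
                    "(sensor TEXT, value REAL, ts TEXT)")
        con.execute("INSERT INTO readings VALUES (?, ?, ?)",
                    (sensor, value, timestamp))


def flush_to_server() -> None:
    """Send all buffered readings as JSON over HTTPS; the server persists
    them in its own relational database (e.g., MySQL)."""
    with sqlite3.connect(DB_PATH) as con:
        rows = con.execute("SELECT sensor, value, ts FROM readings").fetchall()
    payload = json.dumps([{"sensor": s, "value": v, "ts": t}
                          for s, v, t in rows]).encode("utf-8")
    req = urllib.request.Request(ENDPOINT, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)


buffer_reading("accelerometer", 1.02, "2018-06-01T10:15:00Z")
# flush_to_server() would run periodically, e.g. hourly when on Wi-Fi.
```

Buffering locally and flushing in batches keeps transmission overhead low and tolerates the intermittent connectivity typical of wearable devices.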
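The feature extraction mentioned above can likewise be illustrated with a deliberately simple step counter that counts upward threshold crossings of the acceleration magnitude; the threshold value and the synthetic test signal are assumptions for the example, not the project's actual method.

```python
import numpy as np


def step_count(acc_magnitude: np.ndarray, threshold: float = 1.2) -> int:
    """Count steps as upward crossings of a threshold (in g) by the
    acceleration magnitude signal."""
    above = acc_magnitude > threshold
    return int(np.sum(above[1:] & ~above[:-1]))


# One minute of 50 Hz data with a ~2 Hz walking rhythm: ~120 steps expected.
t = np.linspace(0, 60, 60 * 50)
signal = 1.0 + 0.5 * np.clip(np.sin(2 * np.pi * 2 * t), 0, None)
print(step_count(signal))
```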
<table>
<tr>
<th>
FAIR Data 2.1. Making data findable, including provisions for metadata
</th>
<th>
* Outline the discoverability of data (metadata provision)
* Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
* Outline naming conventions used
* Outline the approach towards search keyword
* Outline the approach for clear versioning
* Specify standards for metadata creation (if any). If there are no standards in your discipline describe what type of metadata will be created and how
</th> </tr>
<tr>
<td>
No deviations from the intended FAIR principles are foreseen at this point.
</td> </tr>
<tr>
<td>
2.2 Making data openly accessible
</td>
<td>
* Specify which data will be made openly available? If some data is kept closed provide rationale for doing so
* Specify how the data will be made available
* Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
* Specify where the data and associated metadata, documentation and code are deposited
* Specify how access will be provided in case there are any restrictions
</td> </tr>
<tr>
<td>
It is unclear at this time whether it will be possible to trace a person from
the data (for instance through location tracking), and therefore how much
privacy protection the datasets will need. It is our intention to make the
datasets as open as possible, but if this turns out to violate privacy
regulations, we will keep the datasets closed.
Since we do not yet know what the data set will look like, we do not have any
specific methods or software in mind to access the data. The methods/software
may change depending on how the raw sensor data and behavioural data are
integrated in/with the knowledge bases.
Any software that allows access to relational databases (e.g., MySQL) will do.
Some examples of open source options are DBeaver, Sqlectron or Sequel Pro.
It is our intention to make the data as accessible as possible; this includes
storing the data in a broadly used file format.
No further deviations from the intended FAIR principles are foreseen at this
point.
</td> </tr>
<tr>
<td>
2.3. Making data interoperable
</td>
<td>
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
* Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
</td> </tr> </table>
<table>
<tr>
<th>
We use standard models for encoding the data (e.g., JSON, CSV).
At this moment we do not plan to use any specific ontologies; should the need
arise, we can implement the necessary changes at that point.
No further deviations from the intended FAIR principles are foreseen at this
point.
</th> </tr>
<tr>
<td>
2.4. Increase data re-use (through clarifying licences)
</td>
<td>
* Specify how the data will be licenced to permit the widest reuse possible
* Specify when the data will be made available for reuse. If applicable, specify why and for what period a data embargo is needed
* Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
* Describe data quality assurance processes
* Specify the length of time for which the data will remain re-usable
</td> </tr>
<tr>
<td>
Depending on the content of the data set and whether it contains personal information, re-use by third parties could be possible.
No further deviations from the intended FAIR principles are foreseen at this point.
</td> </tr>
<tr>
<td>
3\. Allocation of resources
</td>
<td>
* Estimate the costs for making your data FAIR.
Describe how you intend to cover these costs
* Clearly identify responsibilities for data management in your project
* Describe costs and potential value of long term preservation
</td> </tr>
<tr>
<td>
The work to be done in making the data FAIR will be covered by the regular working budget for producing the deliverables. From CMC-BSS, Oresti Banos will be responsible for the data management for this purpose.
</td> </tr>
<tr>
<td>
4\. Data security
</td>
<td>
▪ Address data recovery as well as secure storage and transfer of sensitive
data
</td> </tr>
<tr>
<td>
An anonymised universally unique identifier (UUID) will be used to identify the data collected from each user. This identifier will in no way allow revealing the identity of the user. However, certain combinations of data (for example, 24-hour location tracking) might still make it possible to identify a person.
The raw sensor data will be transmitted over HTTPS in the form of data objects (e.g., JSON) to a secure server, where it is persisted in another relational database management system (e.g., MySQL). Further information on the server is not yet available at the moment of writing.
In all cases data will be stored according to the project’s guidelines on personal data. (A minimal sketch of the identifier scheme is given after this table.)
</td> </tr>
<tr>
<td>
5\. Ethical aspects
</td>
<td>
▪ To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if
not covered by the former
</td> </tr>
<tr>
<td>
If, during data collection, we find that the responses could be used to
identify individuals, the data will be treated as personal data and will be
stored as such.
The end user will receive an information leaflet and will sign a consent form.
This way we ensure the patient is fully informed about the nature of the
research and the data collection that takes place, and that they give their
(full) consent for the research.
Furthermore, in the case of an accidental finding arising from the nature of
the responses, we will contact the end user immediately.
</td> </tr>
<tr>
<td>
6\. Other
</td>
<td>
▪ Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
</td> </tr>
<tr>
<td>
No other procedures need to be put in place for project management data.
</td> </tr> </table>
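As referenced in the data security row above, the identifier scheme can be as simple as the following sketch. A random (version 4) UUID carries no information about the person, so any mapping from participant to UUID, if kept at all, must be stored separately and securely.

```python
import uuid

# One random UUID per participant labels all of that user's data
# without revealing who they are.
participant_key = str(uuid.uuid4())
print(participant_key)  # e.g. '6f1c1d3e-8a47-4a5e-9a0b-2f64c1f0b9d2'
```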
## 10 WP5: Data for Dialogue and Argumentation Framework
<table>
<tr>
<th>
DMP component Issues to be addressed
1\. Data summary
</th>
<th>
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data
generated/collected
* Specify if existing data is being re-used (if any)
* Specify the origin of the data
* State the expected size of the data (if known)
* Outline the data utility: to whom will it be useful
</th> </tr>
<tr>
<td>
Data will be collected through videotaping sessions where experts pretend-play
a coaching session with an actor. This way data can be collected to study the
verbal and nonverbal behaviours of the participants such as body language,
interaction between the different experts, and use of language and dialogue in
order to make the virtual coaching session as realistic as possible. These
sessions will be audio-recorded and videotaped from multiple angles.
The sessions will be annotated to generate annotation files. The analysis will
be conducted on different levels: at the behaviour level, we will annotate
with automatic tools (e.g. facial expression analysis, body movement analysis,
prosody analysis, text transcription, etc.) and manually whenever necessary.
We will also annotate higher-level information such as level of engagement,
the attitude the participants have toward each other, emotion, etc. Other
annotations may be conducted, such as dialogue strategies, turn-taking
dynamics, and interruption types.
These annotations will be further analysed and used to build virtual coaches
with their own specific style that ought to be defined at different levels (cf
WP6): behaviour, emotional sensibility, attitude, interactional sensibility.
The format of the data will be audio and video streams using formats such as
MP3, MPEG or AVI.
At the moment we are choosing between different annotation tools such as ELAN
and NOVA (developed by Elisabeth André’s group at the University of Augsburg).
The analysis of the data will be used to develop a computational model of the
virtual coach’s nonverbal behaviours. It will correspond to software code and
a library of behaviours defined in a lexicon (see D2.1 and D6.1).
Regarding re-use, at the moment we are looking at existing databases such as
the NoXi database, available at _https://noxi.aria-agent.eu/_ after signing
an End User License Agreement (EULA). We will refer to existing databases to
learn about specific phenomena (e.g. interruption). These databases are either
publicly available or have been gathered by ourselves (and are publicly
available after signing a EULA).
The re-enactment scenarios are provided by RRD and UDun.
The expected size of the data is multiple GBs, depending on the size of the
videos. It should be large enough to allow for a machine learning approach.
The data will be useful to UPMC for designing the different virtual coaches.
The data could also be used to study communicative behaviours, interactive
behaviours, engagement level, etc. Researchers from social sciences,
computational linguistics, affective computing and social signal processing
may find the databases very interesting, especially since they gather data on
group conversations, which have barely been available so far.
</td> </tr>
<tr>
<td>
FAIR Data 2.1. Making data findable, including provisions for metadata
</td>
<td>
* Outline the discoverability of data (metadata
provision)
* Outline the identifiability of data and refer to standard identification mechanism. Do you make use
of persistent and unique identifiers such as Digital Object Identifiers?
* Outline naming conventions used
* Outline the approach towards search keyword
* Outline the approach for clear versioning
* Specify standards for metadata creation (if any). If there are no standards in your discipline describe what type of metadata will be created and how
</th> </tr>
<tr>
<td>
No further deviations from the intended FAIR principles are foreseen at this
point.
</td> </tr>
<tr>
<td>
2.2 Making data openly accessible
</td>
<td>
* Specify which data will be made openly available? If some data is kept closed provide rationale for doing so
* Specify how the data will be made available
* Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
* Specify where the data and associated metadata, documentation and code are deposited
* Specify how access will be provided in case there are any restrictions
</td> </tr>
<tr>
<td>
No further deviations from the intended FAIR principles are foreseen at this
point.
</td> </tr>
<tr>
<td>
2.3. Making data interoperable
</td>
<td>
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
* Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
</td> </tr>
<tr>
<td>
No further deviations from the intended FAIR principles are foreseen at this
point.
</td> </tr>
<tr>
<td>
2.4. Increase data re-use (through
clarifying licences)
</td>
<td>
* Specify how the data will be licenced to permit the widest reuse possible
* Specify when the data will be made available for reuse. If applicable, specify why and for what period a data embargo is needed
* Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
* Describe data quality assurance processes
* Specify the length of time for which the data will remain re-usable
</td> </tr>
<tr>
<td>
No further deviations from the intended FAIR principles are foreseen at this
point.
</td> </tr>
<tr>
<td>
3\. Allocation of resources
</td>
<td>
▪ Estimate the costs for making your data FAIR.
Describe how you intend to cover these costs
* Clearly identify responsibilities for data management in your project
* Describe costs and potential value of long term preservation
</td> </tr>
<tr>
<td>
The work to be done in making the data FAIR will be covered by the regular
working budget for producing the deliverables. From UDun, Alison Pease will be
responsible for the data management for this purpose. From UPV, Catherine
Pelachaud will be responsible for the transcribed files.
</td> </tr>
<tr>
<td>
4\. Data security
</td>
<td>
▪ Address data recovery as well as secure storage and transfer of sensitive
data
</td> </tr>
<tr>
<td>
The videos as captured during the interview with the experts will be stored
and shared through the University of Dundee’s own box.com solution. This
storage solution fully complies with the GDPR.
Access to the videos will be granted only by request through Mark Snaith.
</td> </tr>
<tr>
<td>
5\. Ethical aspects
</td>
<td>
▪ To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former
</td> </tr>
<tr>
<td>
The experts will receive an information leaflet and will sign a consent form.
This way we ensure the person is fully informed about the nature of the
research and the data collection that takes place, and that they have options
in giving their consent for the use of the data in the research.
</td> </tr>
<tr>
<td>
6\. Other
</td>
<td>
▪ Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
</td> </tr>
<tr>
<td>
No other procedures need to be put in place for project management data.
</td> </tr> </table>
## 11 WP6: Data for Human Computer Interfaces
<table>
<tr>
<th>
DMP component Issues to be addressed
1\. Data summary
</th>
<th>
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data generated/collected
* Specify if existing data is being re-used (if any)
* Specify the origin of the data
* State the expected size of the data (if known)
* Outline the data utility: to whom will it be useful
</th> </tr>
<tr>
<td>
In order to design, implement and evaluate user interactions with the coaches,
the following kinds of data will be collected.
**Corpora (video and audio) of representative conversational behaviour by
humans, to be used for defining and generating Council of Coaches coaching
behaviour:**
Video and audio; might be published as a corpus. For this data to be used and
stored, consent from the human participants will be required. If no consent
can be obtained, we will need to anonymise the video footage. It will be
stored in MP3 and MP4 or AVI format.
**Annotations of video/audio/etc from experiments and corpora:**
The video and audio files will be annotated in order to make the data more
accessible. The annotations are made in a common annotation tool (often ELAN);
generally an XML storage format will be used for the codes, and Word (or Open
Office) documents for the coding scheme. (A simplified sketch of such an
annotation file is given after this table.)
**Data from local study sessions (experimental and explorative)** :
Local study sessions will be conducted to analyse the impact of the system,
the behaviour of users with the system, and the perception of system behaviour
by the user. The following kinds of data will be collected:
* System logs (as described in chapter 12);
* Data collected using the sensor system (as described in chapter 9);
* Video and audio recordings for analysis of the user in interaction with the system. These will be stored in MP3, MP4 or AVI format.
* Dialogue logs (speech and/or transcription: what has been said during the dialogue) in Elan, XML and doc format.
* Questionnaires and interviews with user about the experience. The format of these are still up for discussion.
**Data from demonstrator sessions (so not collected for a study, but to
showcase our system):** When the system is showcased, it will automatically
collect the same kind of data as when it is in experimental mode. This data
will be discarded as soon as possible after the demo. However, we might opt to
use some data as PR material; end users will always be asked to sign a consent
form in that case.
**Processed data from studies fit for publication:**
We may want to publicly release some data collected in studies for journal
publication or presentations at conferences. This requires anonymization, as
well as the right kind of consent.
The data originates from experiments with humans and virtual demonstrators,
conducted on the platform developed within the project.
The size of the data set will probably be several GBs, depending on the length
and quality of the video footage.
</td> </tr> </table>
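To illustrate the annotation storage mentioned in the Data summary above, the sketch below writes a strongly simplified, ELAN-like XML annotation file with one time-aligned tier. The real EAF schema is considerably richer, and the tier and file names here are hypothetical.

```python
import xml.etree.ElementTree as ET

# Build a minimal annotation document: two time slots and one annotation
# ("high engagement") aligned between them (times in milliseconds).
doc = ET.Element("ANNOTATION_DOCUMENT")
time_order = ET.SubElement(doc, "TIME_ORDER")
ET.SubElement(time_order, "TIME_SLOT", TIME_SLOT_ID="ts1", TIME_VALUE="1200")
ET.SubElement(time_order, "TIME_SLOT", TIME_SLOT_ID="ts2", TIME_VALUE="3400")

tier = ET.SubElement(doc, "TIER", TIER_ID="engagement")
ann = ET.SubElement(tier, "ANNOTATION")
aligned = ET.SubElement(ann, "ALIGNABLE_ANNOTATION", ANNOTATION_ID="a1",
                        TIME_SLOT_REF1="ts1", TIME_SLOT_REF2="ts2")
ET.SubElement(aligned, "ANNOTATION_VALUE").text = "high engagement"

ET.ElementTree(doc).write("session01_annotations.xml",
                          encoding="utf-8", xml_declaration=True)
```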
<table>
<tr>
<th>
Moreover, the raw sensory data as well as the processed data could be of much
relevance for other groups conducting research in social and behavioural
computing.
</th> </tr>
<tr>
<td>
FAIR Data 2.1. Making data findable, including provisions for metadata
</td>
<td>
* Outline the discoverability of data (metadata provision)
* Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
* Outline naming conventions used
* Outline the approach towards search keyword
* Outline the approach for clear versioning
* Specify standards for metadata creation (if any). If there are no standards in your discipline describe what type of metadata will be created and how
</td> </tr>
<tr>
<td>
No further deviations from the intended FAIR principles are foreseen at this
point.
</td> </tr>
<tr>
<td>
2.2 Making data openly accessible
</td>
<td>
* Specify which data will be made openly available? If some data is kept closed provide rationale for doing so
* Specify how the data will be made available
* Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
* Specify where the data and associated metadata, documentation and code are deposited
* Specify how access will be provided in case there are any restrictions
</td> </tr>
<tr>
<td>
No further deviations from the intended FAIR principles are foreseen at this
point.
</td> </tr>
<tr>
<td>
2.3. Making data interoperable
</td>
<td>
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
* Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
</td> </tr>
<tr>
<td>
No further deviations from the intended FAIR principles are foreseen at this
point.
</td> </tr>
<tr>
<td>
2.4. Increase data re-use (through clarifying licences)
</td>
<td>
* Specify how the data will be licenced to permit the widest reuse possible
* Specify when the data will be made available for reuse. If applicable, specify why and for what period a data embargo is needed
* Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
* Describe data quality assurance processes
* Specify the length of time for which the data will remain re-usable
</td> </tr>
<tr>
<td>
No further deviations from the intended FAIR principles are foreseen at this
point.
</td> </tr>
<tr>
<td>
3\. Allocation of resources
</td>
<td>
* Estimate the costs for making your data FAIR.
Describe how you intend to cover these costs
* Clearly identify responsibilities for data management in your project
* Describe costs and potential value of long term preservation
</td> </tr>
<tr>
<td>
The work to be done in making the data FAIR will be covered by the regular
working budget for producing the deliverables. From CMC-HMI Randy Klaassen
will be responsible for the data management for this purpose.
</td> </tr>
<tr>
<td>
4\. Data security
</td>
<td>
▪ Address data recovery as well as secure storage and transfer of sensitive
data
</td> </tr>
<tr>
<td>
Personal data that is collected goes on an encrypted hard disk that we can
carry from location to location; a backup will then be made (encrypted as
well) and stored at HMI in a safe place, with the password to the encryption
stored behind lock and key.
The consent forms used for the video, audio and experimental data collection
will be stored in a locked cupboard at HMI. In case of pseudonymisation, the
key linking consent forms to participant numbers is stored at a physically
different location.
Any information used for publications or presentations will be made fully
anonymous, unless there is the right consent from the end user.
Data to be opened at the end of the project will be handled in line with the
project’s guidelines as well as the HMI group’s own data policy. (Since this
policy is still under development, no specifics can be given on what it will
contain. However, at the end of the project the data policy should be in place
and should be used as an additional guideline on data management.)
</td> </tr>
<tr>
<td>
5\. Ethical aspects
</td>
<td>
▪ To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if
not covered by the former
</td> </tr>
<tr>
<td>
The end user will receive an information leaflet and will sign a consent form.
This way we ensure the end user is fully informed about the nature of the
research and the data collection that takes place and they give their (full)
consent for the research.
</td> </tr>
<tr>
<td>
6\. Other
</td>
<td>
▪ Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
</td> </tr>
<tr>
<td>
Data files that contain personal data and cannot be made openly available
will be stored according to the UT-HMI data policy for 10 years in a secure
environment.
</td> </tr> </table>
## 12 WP7: Data for Continuous Integration and Demonstration
<table>
<tr>
<th>
DMP component Issues to be addressed
1\. Data summary
</th>
<th>
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data generated/collected
* Specify if existing data is being re-used (if any)
* Specify the origin of the data
* State the expected size of the data (if known)
* Outline the data utility: to whom will it be useful
</th> </tr>
<tr>
<td>
To facilitate the integration of the various software components developed in
the project as well as maintaining two working demonstrators, several datasets
will be collected:
**Knowledge base:**
The information in the knowledge base concerns the user and the environment,
and is used by the system itself to provide its features. The knowledge base
is going to be implemented in cooperation with WP3.
The format is still to be decided, but it is safe to assume it will be some
kind of database, whether relational, NoSQL or semantic.
The data in the knowledge base will be collected from different sources. It
will be generated based on the raw sensor data coming from wearable sensors,
surveys with the end users, and conversations the end users have with the
system. The data will be updated during the trials by means of machine
learning from the input of the users of the system.
For the size of the data, it is difficult to estimate the exact amount
generated until the system is running as expected, especially as the format is
not decided yet. As a reference, a real-life deployment of universAAL that
gathers environment information generates around 4 GB every two months. It is
safe to assume that the project’s technology can generate knowledge
information in the range of gigabytes per month.
The knowledge base will be used by developers and technicians in the project
for troubleshooting and debugging. Externally the knowledge base can be
interesting for researchers in the field of coaching, human behaviour and
other social sciences.
**System logs:**
To monitor the adequate operation of the entire system, analyse its
performance and issues, and troubleshoot any system errors that may happen.
The system logs will be useful for developers and technicians in the project
for troubleshooting and debugging.
**User-related logs:**
To monitor the interaction of the user with the system, and to analyse how the
system is used, how many times, in which conditions, and at which hours. This
can provide interesting information on how the system is used, and helps
troubleshoot any errors: it gives information on how to reproduce errors and
find out whether the system was being improperly used. The user logs will be
useful for developers and technicians in the project for troubleshooting and
debugging. The logs are useful for behaviour analysis as well, to improve
usability and to determine whether the system is successful in its goals.
Depending on what kind of information the logs will contain, they might be of
interest to social scientists focusing on the online behaviour of patients.
Probably all of the logs will be text files following commonly used logger
formats, depending on the technology used. For instance, Java-based software
can use Log4j logs, which are widely used. The information recorded in these
logs is generated by the programs themselves during their execution; its
content therefore depends on what the developers decided to log at each point
in their program. For software developed by the project, this can be
determined by ourselves if need be. For external libraries not developed by
the project, the information recorded in their logs is detailed in their
respective documentation. (An illustrative logger configuration is sketched at
the end of this section.)
All data will be collected during the trials of the project and by testing the
prototypes. For the size of the data, it is not possible to estimate in
advance an accurate measurement of the log files: these can be configured to
record more or less information based on what we finally need, and it is also
difficult to estimate the exact amount of data generated until the system is
running as expected. As a reference, a recent pilot based on the universAAL
platform generated around 10 MB of logs per day. It is safe to assume that the
project can generate logs in the range of tens to hundreds of MB per day.
</th> </tr>
<tr>
<td>
FAIR Data 2.1. Making data findable, including provisions for metadata
</td>
<td>
* Outline the discoverability of data (metadata provision)
* Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
* Outline naming conventions used
* Outline the approach towards search keyword
* Outline the approach for clear versioning
* Specify standards for metadata creation (if any). If there are no standards in your discipline describe what type of metadata will be created and how
</td> </tr>
<tr>
<td>
At the moment we foresee the system logs only being used during the project.
They are of no interest to keep after the project, since they are used for
troubleshooting and analysing performance issues. The user logs could be of
interest to social scientists, but this will need to be determined after we
have decided on what kind of data will be collected.
For log files it is common practice to include the date of the log in the file
name as well, so we will keep with this practice.
No further deviations from the intended FAIR principles are foreseen at this
point.
</td> </tr>
<tr>
<td>
2.2 Making data openly accessible
</td>
<td>
* Specify which data will be made openly available? If some data is kept closed provide rationale for doing so
* Specify how the data will be made available
* Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
* Specify where the data and associated metadata, documentation and code are deposited
* Specify how access will be provided in case there are any restrictions
</td> </tr>
<tr>
<td>
**System logs:**
These can, in principle, be made public. However, they are not expected to
offer much information of interest to others outside the project. There are
two possible restrictions to consider if this data were to be published:
1. User-identifiable data: There may be log entries that could be used to infer user-related information. These should not be there in the first place if logs are going to be shared.
2. “Confidential” program logic: There could be logs produced by software that is considered confidential by some developer, and could be used for reverse-engineering.
Almost all logging libraries and utilities in any program can be configured to
follow a format that can be read by many log-analysis tools. These allow easy
reading of logs and can produce statistics out of them. Documentation about
logs will be provided with the log files.
The logs are regular (text) files and can be made public through any online
method for file sharing.
**Knowledge base:**
This can be made public and could be one of the most interesting data to be
analysed. There is, however, one possible restriction: user-identifiable data.
There may be entries that could be used to infer user information and violate
privacy restrictions (e.g. inferring private data by analysing behaviour, or,
when 24-hour location tracking is performed, retracing the home location and
therefore an identifiable person).
There are some technologies that could facilitate making the database
accessible. For instance, if the knowledge base were implemented in or
connected to FIWARE, it would be possible to use the CKAN Generic Enabler to
publish its stored data as a publicly available CKAN repository. We will keep
this in mind while developing the database.
All databases (or similar data-storage technologies) provide some method or
language to access the data: for instance, SQL for relational databases or
SPARQL for semantic stores. These can be used to read data in bulk or to
perform complex queries for statistics (a minimal query sketch is given at the
end of this section). Public access to the databases would require that these
are exposed online with a query endpoint. Some engines already include query
endpoints, but this requires that the project maintains a server for accessing
the hosted database.
**Source Code:**
It is our intention to release the relevant software as open source, but it
will need to be discussed which parts will be made accessible. Open source
code published by the project will be available on GitLab:
_https://gitlab.com/CouncilOfCoaches_ . This can be done free of charge. At
the moment this is an invitation-only group, to be used by the project members
who have a GitLab account. Parts of the software can be made available at a
later stage, while other parts can remain private.
No further deviations from the intended FAIR principles are foreseen at this
point.
</th> </tr>
<tr>
<td>
2.3. Making data interoperable
</td>
<td>
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
* Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
</td> </tr> </table>
<table>
<tr>
<th>
System/User logs: These can follow established formats for logging that are
widely used and known by developers and technicians.
Knowledge base: Almost all possible options of technologies to be used in the
knowledge base follow a well-known format or query language.
The use of standards will depend on the underlying technology, but standards
will be used whenever possible. However, it is highly unlikely that there are
existing standards in this domain that cover all the needs of the project, so
some degree of custom modelling is expected.
No further deviations from the intended FAIR principles are foreseen at this
point.
</th> </tr>
<tr>
<td>
2.4. Increase data re-use (through clarifying licences)
</td>
<td>
* Specify how the data will be licenced to permit the widest reuse possible
* Specify when the data will be made available for reuse. If applicable, specify why and for what period a data embargo is needed
* Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
* Describe data quality assurance processes
* Specify the length of time for which the data will remain re-usable
</td> </tr>
<tr>
<td>
No further deviations from the intended FAIR principles are foreseen at this
point.
</td> </tr>
<tr>
<td>
3\. Allocation of resources
</td>
<td>
* Estimate the costs for making your data FAIR.
Describe how you intend to cover these costs
* Clearly identify responsibilities for data management in your project
* Describe costs and potential value of long term preservation
</td> </tr>
<tr>
<td>
The work to be done in making the data FAIR will be covered by the regular
working budget for producing the deliverables. UPV will be responsible for the
data management for this purpose. At the moment we are looking for the right
person to do this.
</td> </tr>
<tr>
<td>
4\. Data security
</td>
<td>
▪ Address data recovery as well as secure storage and transfer of sensitive
data
</td> </tr>
<tr>
<td>
At the moment we are looking into setting up a secure server to be able to
integrate the different parts of the software. This server will comply with
all privacy regulations. Communication with external sources will go through
an HTTPS connection, and regular backups will be made.
Further decisions on the server have not been made yet.
</td> </tr>
<tr>
<td>
5\. Ethical aspects
</td>
<td>
▪ To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if
not covered by the former
</td> </tr>
<tr>
<td>
All personal data stored and processed will have the consent of the end users.
</td> </tr>
<tr>
<td>
6\. Other
</td>
<td>
▪ Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
</td> </tr> </table>
No other procedures need to be put in place for project management data.
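As referenced earlier in this section, the sketch below shows an illustrative logger configuration following the described practices: a widely used line format and the date included in the log file name. The component names and the format string are assumptions rather than project-mandated choices; a Java component would achieve the same with a Log4j configuration.

```python
import logging
from datetime import date

# Date-stamped file name, as is common practice for log files.
logfile = f"system_{date.today():%Y-%m-%d}.log"
logging.basicConfig(
    filename=logfile,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s - %(message)s",
)
log = logging.getLogger("coach.ingest")  # hypothetical component name
log.info("received %d readings from participant %s", 128, "anon-uuid")
```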
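Likewise, bulk and statistical access to a relational knowledge base, as discussed under accessibility above, could look like the following sketch. The `behaviours` table is a hypothetical schema; with MySQL or another engine only the driver and connection parameters would change, and a semantic store would use SPARQL instead of SQL.

```python
import sqlite3

con = sqlite3.connect("knowledge_base.db")
con.execute("CREATE TABLE IF NOT EXISTS behaviours "
            "(user_id TEXT, day TEXT, behaviour TEXT, minutes REAL)")
con.execute("INSERT INTO behaviours VALUES "
            "('anon-1', '2018-06-01', 'resting', 42.0)")

# A statistical query in plain SQL: average minutes per behaviour.
for behaviour, avg_minutes in con.execute(
        "SELECT behaviour, AVG(minutes) FROM behaviours GROUP BY behaviour"):
    print(behaviour, round(avg_minutes, 1))
con.close()
```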
## 13 WP8: Data for Dissemination and Exploitation
<table>
<tr>
<th>
DMP component Issues to be addressed
1\. Data summary
</th>
<th>
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data generated/collected
* Specify if existing data is being re-used (if any)
* Specify the origin of the data
* State the expected size of the data (if known)
* Outline the data utility: to whom will it be useful
</th> </tr>
<tr>
<td>
For the dissemination of the project results, the communication to raise
awareness of the project (in terms of training and ecosystem building), and
the pre-marketing for the exploitation of results, the following datasets will
be collected:
1. Publications in journals and at conferences describing project results. These publications will be published through green or gold open access publishing as much as possible. They will be available in PDF format. All partners will be responsible for disseminating the research data through papers. These results will be useful for researchers in the same or adjoining fields. The dataset will probably be limited to 1 GB of data.
2. Exploitation plans, written in .doc format. Depending on the level of detail described in the plans, these may be kept confidential within the consortium. The partners will be the authors of the plans, but separate exploitation agreements might be set up with third parties. Both partners and third parties are expected to benefit directly from the exploitation of the project results. The dataset will probably be limited to 1 GB of data.
3. Ecosystem building for the Open Agent Platform. Our assumption at the moment is that building an ecosystem will mainly be an online activity, where the consortium will have no specific ownership of, for example, forum discussions. The data will thus be created by the ecosystem and be available and useful for everyone who is interested.
4. Standardization activities. At the moment we assume that these will mainly consist of .doc files of limited size. These could be of interest to the standardization community while developed within the consortium. The size of the files will be several MBs.
5. Training activities. For now, we assume these will consist of videos or presentations, which can be posted online for everyone to see. The videos will be put in MP4 or a similar format and posted on the online platform expected to best reach our target groups. Presentations will be made in PowerPoint or a similar Open Office format and are expected to be used in the face-to-face training of different stakeholders. Our expectation at the time of writing is that all materials will be posted online, available for everyone. The size of the data set will depend on the size of the video footage, but we assume it will not exceed 10 GB.
Other communication items, like websites, press releases, interviews and other
dissemination items, will be made to create awareness of the project and to
build a community of interested subjects. All this information will be made
available at least through our own website, which will be continued for at
least 2 years after the project has finished. At the moment we do not know the
size of the dataset, since it will depend on the content that will be posted
there. Different target groups will be reached through several media, as
described in the communication plan (D8.2). The website will make use of
Google Analytics in order to further improve the use of the website. Google
Analytics completely anonymises the data of the visitors of the website and
stores this information on its own servers.
</td> </tr>
<tr>
<td>
FAIR Data 2.1. Making data findable, including provisions for metadata
</td>
<td>
▪ Outline the discoverability of data (metadata provision)
* Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
* Outline naming conventions used
* Outline the approach towards search keyword
* Outline the approach for clear versioning
* Specify standards for metadata creation (if any). If there are no standards in your discipline describe what type of metadata will be created and how
</th> </tr>
<tr>
<td>
Data related to communication, dissemination and pre-marketing will be
findable - to the best of the consortium’s capacity - utilizing digital
communications best practices, e.g. hashtags, metadata, keywords. In social
media, Council of Coaches posts will be findable and discoverable by name,
e.g. @Council_Coaches, while for posts to different media (e.g. 3rd party
blogs), the posts will refer to the project website, i.e.
council-of-coaches.eu.
At this moment we foresee no separate datasets to be posted in repositories at
the end of the project.
</td> </tr>
<tr>
<td>
2.2 Making data openly accessible
</td>
<td>
* Specify which data will be made openly available? If some data is kept closed provide rationale for doing so
* Specify how the data will be made available
* Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
* Specify where the data and associated metadata, documentation and code are deposited
* Specify how access will be provided in case there are any restrictions
</td> </tr>
<tr>
<td>
Most of this data will be made public, although an exception might be made
when it comes to data concerning the exploitation of results. We foresee that
most data will be published online, just not in online repositories, since it
does not contain specific research data.
</td> </tr>
<tr>
<td>
2.3. Making data interoperable
</td>
<td>
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
* Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
</td> </tr>
<tr>
<td>
This is not applicable for data related to communication, dissemination and
pre-marketing.
</td> </tr>
<tr>
<td>
2.4. Increase data re-use (through clarifying licences)
</td>
<td>
* Specify how the data will be licenced to permit the widest reuse possible
* Specify when the data will be made available for reuse. If applicable, specify why and for what period a data embargo is needed
* Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
* Describe data quality assurance processes
* Specify the length of time for which the data will remain re-usable
</td> </tr>
<tr>
<td>
Data related to communication, dissemination and pre-marketing will be allowed
for reuse, following standard digital practices, i.e. naming the source of the
information (e.g. _http://council-ofcoaches.eu/news/xxx_ ). In addition,
photos from the consortium may be released under specific Creative Commons
licenses.
</td> </tr>
<tr>
<td>
3\. Allocation of resources
</td>
<td>
* Estimate the costs for making your data FAIR.
Describe how you intend to cover these costs
* Clearly identify responsibilities for data management in your project
* Describe costs and potential value of long term preservation
</td> </tr>
<tr>
<td>
There are no costs related to making the data FAIR. Costs related to the
support and maintenance of the digital infrastructures are not considered, as
they will occur either way.
</td> </tr>
<tr>
<td>
4\. Data security
</td>
<td>
▪ Address data recovery as well as secure storage and transfer of sensitive
data
</td> </tr>
<tr>
<td>
Information posted on the website will be posted through WordPress, which will
provide backups of the website as well. For other purposes: non-sensitive data
is stored in a Dropbox folder for the entire consortium to access. Sensitive
data, in terms of personal data and privacy, is stored on a SharePoint portal,
in line with the project’s guidelines.
Privacy statements will be provided on the website and in the newsletter.
The newsletter is sent through MailChimp, which will provide their own
backups. All registered users can opt out of the newsletter at any time.
</td> </tr>
<tr>
<td>
5\. Ethical aspects
</td>
<td>
▪ To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if
not covered by the former
</td> </tr>
<tr>
<td>
All participants in the consortium have agreed with posting their pictures
online for dissemination items and project updates.
</td> </tr>
<tr>
<td>
6\. Other
</td>
<td>
▪ Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
</td> </tr>
<tr>
<td>
No other procedures need to be put in place for project management data.
</td> </tr> </table>
# Executive Summary
PoliVisu aims to establish the use of big data and data visualisation as an
integral part of policy making, particularly, but not limited to, the local
government level and the mobility and transport policy domain. The project’s
relation with data is therefore essential and connatural to its experimental
research objectives and activities.
Additionally, the consortium has adhered to the H2020 ORDP (Open Research Data
Pilot) convention with the EC, which explicitly caters for the delivery of a
DMP (Data Management Plan).
According to the PoliVisu DoA (2017), data management planning, monitoring and
reporting is part of WP2 - the Project and Quality Management work package -
and foresees the delivery of four consecutive editions of the DMP at months 6,
12, 24 and 36.
This first edition, delivered in May 2018 as Deliverable 2.10, was not a mere
collection of principles, as it set the stage for the ongoing and next
activities handling data, before and even after the PoliVisu project is
completed. As per the DoA provisions: “ _DMP describes the data management
lifecycle for all data sets that will be collected, processed or generated by
the research project. It is a document outlining how research data will be
handled during a research project, and even after the project is completed,
describing what data will be collected, processed or generated and following
what methodology and standards, whether and how this data will be shared
and/or made open, and how it will be curated and preserved”._
An open question, left unanswered by D2.10, was the identification of which
data sets would be relevant for the PoliVisu (action) research. An Annex to
the Deliverable, entitled “Running list of data sources”, was left blank with
the promise of being taken care of at a later project stage.
In practice, what happened in the consortium’s life was the release - in month
7 - of Deliverable 6.2 “Baseline Analysis”, authored by Joran Van Daele from
Gent together with other colleagues from AIV, GEOS, ISSY and PILSEN. That
deliverable, which came as a specification of D6.1 - where the different needs
and scenarios of the three pilot cities (namely Gent, Pilsen and
Issy-les-Moulineaux) had been described in detail - provided an extensive
overview of the datasets already available in each city, as well as of those
still missing at its date of release.
Therefore, this second edition of the PoliVisu DMP takes stock of the baseline
analysis carried out in D6.2 and uses the information contained therein to
fill in the Annex entitled “Running list of data sources”. This is the major
change with respect to the first edition.
Globally speaking, the original structure of D2.10 has been preserved while
working on the release of D2.11, and the contents of the first edition repeat
themselves almost identically in this document. The reader who is already
familiar with D2.10 may be puzzled by this decision. In fact, the proposed
approach is perfectly coherent with the conception of the DMP as a “living
Deliverable”, being updated at periodic dates following the progress in the
underlying project related activities.
In conclusion, and apart from minor corrections of (mainly) typos throughout
the pre-existing text, only Section 4 and Annex 1 of this edition are new. Of
course, and until further advice, all Sections of this DMP remain valid and
enforceable - for what pertains to or influences the daily work of the
PoliVisu partners.
The structure of this document is as follows:
* **Section 1** presents PoliVisu’s data management lifecycle and frames the DMP within the EU H2020 Guidelines and FAIR data handling principles, thus setting the stage for the following parts.
* **Section 2** is a brief overview of the legal framework, including the EU regulation on personal data protection (GDPR), the H2020 provisions for open access to research data, the specific provisions of
the PoliVisu Grant Agreement and Consortium Agreement and some special
provisions for big data management.
* The core of the DMP is **Section 3** , in which the data usage scenarios are presented and the key issues to be examined in relation to each scenario are discussed. These issues include decisions on e.g. data anonymization, privacy and security protection measures, licensing etc.
* **Section 4** concludes the document by anticipating the expected contents of future editions of the DMP.
For completeness of information, the reader interested in getting to know how
the PoliVisu consortium plans to deal with data may also refer, in addition to
this DMP and the already mentioned D6.2 (Baseline Analysis), to the following,
already or soon to be published, deliverables: D1.1 (Ethical Requirement No.
4), D1.3 (Ethical Requirement No. 3), D2.2 (Project Management Plan), D2.3
(Quality and Risk Plan), D6.1 (Pilot Scenarios), D7.1 (Evaluation Plan) and
D8.1 (Impact Enhancement Road Map).
# Introduction
Visualisation and management of (big) data in a user friendly way for public
administration bodies is one of the primary goals of the PoliVisu project. The
intention is to support integration of (big) data into policy and decision
making processes. The project’s relation with data is therefore essential and
connatural to its experimental research objectives and activities.
Additionally, the consortium adhered to the H2020 ORDP (Open Research Data
Pilot) convention with the EC, which explicitly caters for the delivery of a
DMP (Data Management Plan).
According to the PoliVisu DoA (2017), data management planning, monitoring and
reporting is part of WP2 - the Project and Quality Management work package -
and foresees the delivery of four consecutive editions of the DMP at months 6,
12, 24 and 36. The first edition, however, was not a mere collection of
principles, as it set the stage for the ongoing and next activities handling
data, before and even after the project is completed.
In the following text, we reproduce the contents of Section 1 of D2.10 almost
unchanged, apart from some corrections of typos or equivalent modifications
aimed at improving readability and understandability. In case the reader is
already familiar with the contents of D2.10, the remainder of this Section may
be skipped altogether.
## The PoliVisu Data Management Lifecycle
As per the DoA provisions, the PoliVisu DMP “ _describes the data management
lifecycle_ 1 _for all data sets that will be collected, processed or
generated by the research project. It is a document outlining how research
data will be handled during a research project, and even after the project is
completed, describing what data will be collected, processed or generated and
following what methodology and standards, whether and how this data will be
shared and/or made open, and how it will be curated and preserved”._
This paragraph summarizes the management procedures that will be followed when
dealing with the data of relevance for the PoliVisu project, and which will be
further described in Section 3 of this document. To get a prompt overview of
the relevant datasets for the three pilot cities, the interested reader is
referred to Annex 1 to this document, entitled “Running list of data
sources”.
After an internal discussion, also during the May 2018 meeting held in
Issy-les-Moulineaux, the partners agreed to envisage **three main data usage
scenarios** :
1. Original data produced by the PoliVisu consortium and/or individual members of it (e.g. during a dissemination action or a pilot activity);
2. Existing data already in possession of the PoliVisu consortium and/or individual members of it prior to the project’s initiation;
3. Existing data sourced/procured by the PoliVisu consortium and/or individual members of it during the project’s timeline.
For each of the above scenarios, the key issues to be examined are displayed
by the following logic tree:
**Figure 1 – The PoliVisu Data Management Life Cycle**
For each dataset (or even data point) handled in the project, the first level
of control/decision making must deal with its **nature** , notably whether
it has been (or will be) deemed Confidential, or Anonymised and Public (it
cannot be that the two latter things diverge, apart from very special
occasions, which are coped with in the third logical category displayed in the
picture).
Depending on the assessment of nature, the resulting, mandatory **action
lines** can then be summarized as follows:
* For any acknowledged **Confidential** 2 dataset (or data point), the Consortium and/or each Partner in charge of its handling shall control (if existing) or define (if not) the **Licensing rules** and the **Privacy and security measures** (to be) adopted in the process.
* For any acknowledged **Anonymised and Public** dataset (or data point), the only relevant discipline to be clarified is the set of **Open Access rules** that apply to the case. This set is little controversial for PoliVisu, as the ODRP convention has been adopted, as specified above. Note that the use of open data across the PoliVisu pilots, including e.g. Open Transport Maps or Open Land Use Maps, falls in this category.
* Any dataset (or data point) that does not belong to any of the former two categories is subject to an additional level of action by the Consortium and/or Partner in charge, leading to its classification as either Confidential or Anonymised and Public. In that regard, the two, mutually exclusive action items belonging to this level are:
○ the **anonymisation for publication** action, leading to the migration to
the second category of data, or
○ the adoption of appropriate **privacy and security measures** (very likely
the same applied to the category of Confidential data) in case anonymisation
is not carried out for whatever legitimate reason. Note that in this latter
case, i.e. without anonymisation, **no licensing rules are applicable**
(i.e. the PoliVisu consortium rejects the commercialisation of the personal
profiles of human beings as a non-ethical practice).
## Reference Framework and Perimeter of the DMP
The following picture – borrowed from the official EU H2020 information portal
3 \- clearly identifies the positioning of the DMP in the context of projects
that – like PoliVisu – have voluntarily adhered to the Pilot on Open Research
Data in Horizon 2020 4 .
**Figure 2: Open access to scientific publications and research data in the
wider context of a project’s dissemination and exploitation (source: European
Commission, 2017)**
As can be seen, a DMP holds the same status and relevance as the project’s
Dissemination Plan 4 . More specifically, in the former document, one should
retrieve the full list of research data and publications that the project will
deliver, use or reuse, as well as the indication of whether some data will be
directly exploited by the Consortium, having been patented or protected in any
other possible form. In the latter document, one should retrieve the
Consortium’s detailed provisions for all data and publications that can be
shared with interested third parties, with or without the payment of a fee 5
.
In particular, the following definitions – all taken from the aforementioned
EU H2020 portal – shall apply to our discourse:
* **Access** : “ _the right to read, download and print – but also the right to copy, distribute, search, link, crawl and mine_ ”;
* **Research Data** : “ _[any] information, in particular facts or numbers, collected to be examined and considered as a basis for reasoning, discussion, or calculation. In a research context, examples of data include statistics, results of experiments, measurements, observations resulting from fieldwork, survey results, interview recordings and images. The focus is on research data that is available in digital form_ ”;
* **Scientific Publications** : “ _journal article[s],_ … _monographs, books, conference proceedings, [and] grey literature (informally published written material not controlled by scientific publishers)”_ , such as reports, white papers, policy/position papers, etc.;
* **Open Access Mandate** : “ _comprises 2 steps: depositing publications in repositories [and] providing open access to them_ ”. Very importantly, these steps “ _may or may not occur simultaneously_ ”, depending on conditions that will be explained below:
* **“Green” Open Access (aka Self-Archiving)** : it is granted when the final, peer-reviewed manuscript is deposited by its authors in a repository of their choice. Then open access must be ensured within at most 6 months (12 months for publications in the social sciences and humanities). Thus, open access may actually follow with some delay (due to the so-called “embargo period”);
* **“Gold” Open Access (aka Open Access Publishing)** : it is granted when the final, peer-reviewed manuscript is immediately available on the repository where it has been deposited by its authors (without any delay or “embargo period”). Researchers can also decide to publish their work in open access journals, or in hybrid journals that both sell subscriptions and offer the option of making individual articles openly accessible. In the latter case, the so-called “article processing charges” are eligible for reimbursement during the whole duration of the project (but not after the end of it).
In the PoliVisu **DoA** (2017), the following provisions for Open Access
were defined, which have become part of the Grant Agreement (GA) itself:
_“PoliVisu will follow the Open Access mandate for its publications and will
participate in the Open Research Data pilot, so publications must be published
in Open Access (free online access). Following the list of deliverables, the
consortium will determine the appropriate digital objects that will apply to
the Data Management Plan. Each digital object, including associated metadata,
will be deposited in the institutional repository of Universitat Politècnico
Milano, whose objective is to offer Internet access for university's
scientific, academic and corporate university in order to increase their
visibility and make it accessible and preservable.”_ Evidently, these
provisions belong to the **“Green” Open Access** case.
As far as patenting or other forms of protection of research results are concerned (the bottom part of Figure 2), the ground for this has been paved by the PoliVisu Consortium Agreement (2017), following the DoA, which recognises
that _“formal management of knowledge and intellectual property rights (IPR)
is fundamental for the effective cooperation within the project lifetime and
the successful exploitation of the PoliVisu Framework and tools within and
after the end of the project”_ . Further steps towards a clarification of the
licensing mechanisms will be taken in the context of the 3 foreseen editions
of the Business and Exploitation Plan in the context of WP8 (deliverables D8.3
due at month 12, D8.6 due at month 24 and D8.10 due at month 34). As a general
principle, GA article 26.1, according to which “ _Results are owned by the Party that generates them_ ”, is faithfully adopted in article 8.1 of the PoliVisu Consortium Agreement (CA). In addition, article 8.2 specifies that “ _in case of joint ownership, each of
the joint owners shall be entitled to Exploit the joint Results as it sees
fit, and to grant non-exclusive licences, without obtaining any consent from,
paying compensation to, or otherwise accounting to any other joint owner,
unless otherwise agreed between the joint owners_ ”.
We take the above provisions also as a **guideline for the attribution of
responsibilities of data management** , as far as PoliVisu research results
are concerned. In short, we posit that **ownership goes hand in hand with the responsibility for data management**. This responsibility lies with the same project partner(s) who generate new data, individually or jointly. In case of
reuse of existing data, i.e. owned by someone else (a third party or another
PoliVisu partner), the individual or joint responsibility is to **check the
nature of data** (as specified in Figure 1 above) and **undertake the
consequent actions** as will be further described also in Section 3 below.
## Alignment to the Principles of FAIR Data Handling
Generally speaking, a good DMP under H2020 should comply with the FAIR Data Handling Principles. FAIR stands for Findable, Accessible, Interoperable and Re-usable, as applied to a project’s research outputs – notably those made available in digital form.
The FAIR principles, however, do not belong to H2020 or the EC; they emerged in January 2014 as the result of an informal working group convened by the Netherlands eScience Center and the Dutch Techcentre for the Life Sciences at the Lorentz Center in Leiden, The Netherlands 6 .
Very pragmatically, the European Commission (2016) considers the FAIR
principles fulfilled if a DMP includes the following information:
1. _“The handling of research data during and after the end of the project”_
2. _“What data will be collected, processed and/or generated”_
3. _“Which methodology and standards will be applied”_
4. _“Whether data will be shared/made open access”, and_
5. _“How data will be curated and preserved (including after the end of the project)”._
In the case of PoliVisu, the above information is provided in Section 3 of
this document, which consists of five paragraphs, respectively:
1. Data summary ( _typologies and contents of data collected and produced_ )
2. Data collection ( _which procedures for collecting which data_ )
3. Data processing ( _which procedures for processing which data_ )
4. Data storage ( _data_ _preservation and archiving during and after the project_ )
5. Data sharing ( _including provisions for open access_ )
The following table matches the aforementioned EC requirements with the
contents dealt with in Section 3 paragraphs.
**Table 1. Alignment between this DMP and the EC’s requirements**
<table>
<tr>
<th>
**EC requirements** (rows) vs. **this document’s Section 3 TOC** (columns)
</th>
<th>
**3.1 Data Summary**
</th>
<th>
**3.2 Data**
**Collection**
</th>
<th>
**3.3 Data**
**Processing**
</th>
<th>
**3.4 Data**
**Storage**
</th>
<th>
**3.5 Data**
**Sharing**
</th> </tr>
<tr>
<td>
**A. “The handling of research data during and after the end of the project”**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**B. “What data will be collected, processed and/or generated”**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**C. “Which methodology and standards will be applied”**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**D. “Whether data will be shared/made open access”**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**E. “How data will be curated and preserved (including after the end of the
project)”**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
This Introduction has presented PoliVisu’s data management lifecycle and framed the DMP within the EU H2020 Guidelines and the FAIR data handling principles. The remaining structure of this document is as follows:
* **Section 2** is a brief overview of the legal framework, including the EU regulation on personal data protection (GDPR), the H2020 provisions for open access to research data, the specific provisions of the PoliVisu grant agreement and consortium agreement and some special provisions for big data.
* **Section 3** presents and discusses the data usage scenarios in the framework outlined in the above Table and examines the key issues in relation to each scenario. These issues include decisions on e.g. data anonymization, privacy and security protection measures, licensing etc.
* **Section 4** concludes the document by anticipating the expected contents of future editions of the DMP.
* In **Annex 1** the interested reader can find a running list of utilized / relevant data sources, which will be further updated over the course of the project.
# Legal framework
This Section briefly overviews the key normative references making up the DMP
external context. The next paragraphs respectively deal with:
1. The PSI Directive and its recent modifications and revisions proposals (dated April 2018);
2. The General Data Protection Regulation, coming into force in May 2018;
3. The terms of the H2020 Open Research Data Pilot (ORDP) the PoliVisu consortium has adhered to;
4. The resulting, relevant provisions of both the Grant and the Consortium Agreements;
5. The special provisions for big data management mentioned in the DoA, and thus binding for all partners;
6. A general outline of PoliVisu’s licensing policy.
In the following text, we reproduce the contents of Section 2 of D2.10 almost unchanged, apart from some corrections of typos or equivalent modifications aimed at improving readability and understandability. In case the reader is already familiar with the D2.10 contents, the remainder of this Section may be skipped almost completely, with the sole exception of Section 2.7, which has been created from scratch to deepen the level of analysis on data anonymisation.
## The PSI Directive
The Directive 2003/98/EC on the re-use of Public Sector Information (PSI)
entered into force on 31 December 2003. It was revised by the Directive
2013/37/EU, which entered into force on 17 July 2013. The consolidated text
resulting from the merge of these two legislative documents is familiarly
known as the PSI Directive, and can be consulted on the Eur-Lex website 7 .
On 25 April 2018, the EC adopted a proposal for a revision of the PSI
Directive, which was presented as part of a package of measures aiming to
facilitate the creation of a common data space in the EU. This review also
fulfils the revision obligation set out in Article 13 of the PSI Directive.
The proposal has received a positive opinion from the Regulatory Scrutiny
Board and is now being discussed with the European Parliament and the Council.
It comes as the result of an extensive public consultation process, an
evaluation of the current legislative text and an impact assessment study done
by an independent contractor 8 .
The current PSI Directive and its expected evolution are noteworthy and useful to define the context of the PoliVisu project in general and of this DMP in particular. Thanks to the PSI Directive and its modifications and implementations 9 , making government data and information reusable has become a goal shared at the broad European level. In addition, awareness has been growing remarkably that, as a general principle, the datasets where PSI is stored must be set free by default. However, fifteen years after the
publication of the original PSI Directive, there are still barriers to
overcome (better described in the aforementioned impact assessment study) that
prevent the full reuse of government data and information, including data
generated by the public utilities and transport sectors as well as the results
from public funded R&D projects, two key areas of attention for PoliVisu and
this DMP.
## The EU Personal Data Protection Regulation (GDPR)
Regulation (EU) 2016/679 sets out the new General Data Protection Regulation
(GDPR) framework in the EU, notably concerning the processing of personal data
belonging to EU citizens by individuals, companies or public sector/non-government organisations, irrespective of their location. It is therefore a primary matter of concern for the PoliVisu consortium.
The GDPR was adopted on 27 April 2016, but will become enforceable on 25 May
2018, after a two-year transition period. By then, it will replace the current
Data Protection Directive (95/46/EC) and its national implementations. Being a
regulation, not a directive, GDPR does not require Member States to pass any
enabling legislation and is directly binding and applicable.
The GDPR provisions do not apply to the processing of personal data of
deceased persons or of legal entities. They do not apply either to data
processed by an individual for purely personal reasons or activities carried
out at home, provided there is no connection to a professional or commercial
activity. When an individual uses personal data outside the personal sphere,
for socio-cultural or financial activities, for example, then the data
protection law has to be respected.
On the other hand, the legislative definition of personal data is quite broad,
as it includes any information relating to an individual, whether it relates
to his or her private, professional or public life. It can be anything from a
name, a home address, a photo, an email address, bank details, posts on social
networking websites, medical information, or a computer’s IP address.
While the specific requirements of GDPR for privacy and security are
separately dealt with in other PoliVisu Deliverables (such as D1.1 on POPD
Requirement No. 4 due by month 6 and D1.2 on POPD Requirement No.
6 delivered at month 3, as well as D4.5 & D4.6 on Privacy rules and data
anonymization, due by months 24 & 30 respectively) it is worth noting here
that the PoliVisu consortium has formed a working group composed of the partner organisations’ Data Protection Officers (DPOs). The DPO function and role have been introduced by the GDPR and further defined by a set of EC guidelines, issued on 13 December 2016 and revised on 5 April 2017 10 .
The GDPR text is available on the Eur-Lex website 11 .
## Open Access in Horizon 2020
As partly anticipated in Section 1, the EC has launched in H2020 a flexible
pilot for open access to research data (ORDP), aiming to improve and maximise
access to and reuse of research data generated by funded R&D projects, while
at the same time taking into account the need to balance openness with privacy
and security concerns, protection of scientific information, commercialisation
and IPR. This latter need is crystallised into an opt-out rule, according to which it is possible at any stage – before or after the GA signature – to withdraw from the pilot, but legitimate reasons must be given, such as IPR/privacy/data protection or national security concerns.
With the Work Programme 2017 the ORDP has been extended to cover all H2020
thematic areas by default. This has particularly generated the obligation for
all consortia to deliver a Data Management Plan (DMP), in which they specify what data the project will generate, whether and how it will be made accessible for verification and reuse (or why it will not be freely disclosed, e.g. for exploitation-related purposes), and how it will be curated and preserved.
The ORDP applies primarily to the data needed to validate the results
presented in scientific publications. Other data can however be provided by
the beneficiaries of H2020 projects on a voluntary basis.
The costs associated with the Gold Open Access rule, as well as the creation
of the DMP, can be claimed as eligible in any H2020 grant.
As already mentioned, the PoliVisu consortium has adhered to the **Green
Open Access** rule.
## Grant Agreement and Consortium Agreement provisions
The key GA and CA provisions worth mentioning in relation to our discourse on
data management have been already introduced to a great extent in the previous
Sections. Now we simply reproduce the corresponding articles.
### Grant Agreement
_24.1 Agreement on background_
The beneficiaries must identify and agree (in writing) on the background for
the action (‘agreement on background’).
‘Background’ means any data, know-how or information — whatever its form or
nature (tangible or intangible), including any rights such as intellectual
property rights — that: (a) is held by the beneficiaries before they acceded
to the Agreement, and (b) is needed to implement the action or exploit the
results.
_26.1 Ownership by the beneficiary that generates the results_
Results are owned by the beneficiary that generates them.
‘Results’ means any (tangible or intangible) output of the action such as
data, knowledge or information — whatever its form or nature, whether it can
be protected or not — that is generated in the action, as well as any rights
attached to it, including intellectual property rights.
_26.2 Joint ownership by several beneficiaries_
Two or more beneficiaries own
results jointly if: (a) they have jointly generated them and (b) it is not
possible to:
1. establish the respective contribution of each beneficiary, or
2. separate them for the purpose of applying for, obtaining or maintaining their protection.
_29.1 Obligation to disseminate results_
Unless it goes against their legitimate interests, each beneficiary must — as
soon as possible — ‘disseminate’ its results by disclosing them to the public
by appropriate means (other than those resulting from protecting or exploiting
the results), including in scientific publications (in any medium).
_29.2 Open access to scientific publications_
Each beneficiary must ensure open access (free of charge online access for any
user) to all peer-reviewed scientific publications relating to its results.
_29.3 Open access to research data_
Regarding the digital research data generated in the action (‘data’), the
beneficiaries must:
(a) deposit in a research data repository and take measures to make it
possible for third parties to access, mine, exploit, reproduce and disseminate
— free of charge for any user — the following: (i) the data, including
associated metadata, needed to validate the results presented in scientific
publications as soon as possible;
(ii) other data, including associated metadata, as specified and within the
deadlines laid down in the 'data management plan');
(b) provide information — via the repository — about tools and instruments at
the disposal of the beneficiaries and necessary for validating the results
(and — where possible — provide the tools and instruments themselves).
(...)
As an exception, the beneficiaries do not have to ensure open access to
specific parts of their research data if the achievement of the action's main
objective, as described in Annex 1, would be jeopardised by making those
specific parts of the research data openly accessible. In this case, the data
management plan must contain the reasons for not giving access.
_39.2 Processing of personal data by the beneficiaries_
The beneficiaries must process personal data under the Agreement in compliance
with applicable EU and national law on data protection (including
authorisations or notification requirements). The beneficiaries may grant
their personnel access only to data that is strictly necessary for
implementing, managing and monitoring the Agreement.
### Consortium Agreement
_Attachment 1: Background included_
According to the Grant Agreement (Article 24) Background is defined as “data,
know-how or information (…) that is needed to implement the action or exploit
the results”. Because of this need, Access Rights have to be granted in
principle, but Parties must identify and agree amongst them on the Background
for the project. This is the purpose of this attachment 12 .
(...)
As to EDIP SRO, it is agreed between the Parties that, to the best of their
knowledge, the following background is hereby identified and agreed upon for
the Project: (...)
Algorithms for the analysis of data characterizing the traffic flow from
automatic traffic detectors. Mathematical model of traffic network of roads in
the Czech Republic, including car traffic matrix.
(...)
As to HELP SERVICE REMOTE SENSING SRO, it is agreed between the Parties that,
to the best of their knowledge, the following background is hereby identified
and agreed upon for the Project: (...) Metadata Catalogue Micka.
Senslog Web Server.
HSLayers NG.
Mobile HSLayers NG Cordova.
VGI Apps.
(...)
As to GEOSPARC NV, it is agreed between the Parties that, to the best of their
knowledge, the following background is hereby identified and agreed upon for
the Project: (...) geomajas (http://www.geomajas.org).
INSPIRE>>GIS view & analysis component.
(...)
As to INNOCONNECT SRO, it is agreed between the Parties that, to the best of
their knowledge, the following background is hereby identified and agreed upon
for the Project: (...) WebGLayer library (available at http://webglayer.org/).
(...)
As to CITY ZEN DATA, it is agreed between the Parties that, to the best of
their knowledge, the following background is hereby identified and agreed upon
for the Project: (...) Warp10 platform (www.warp10.io).
(...)
As to ATHENS TECHNOLOGY CENTER SA, it is agreed between the Parties that, to
the best of their knowledge, the following background is hereby identified and
agreed upon for the Project: (...)
TruthNest, which will be integrated as a service within PoliVisu through an
API to be provided by ATC
(...)
As to SPRAVA INFORMACNICH TECHNOLOGII MESTA PLZNE, PRISPEVKOVA ORGANIZACE, it
is agreed between the Parties that, to the best of their knowledge, the
following background is hereby identified and agreed upon for the Project:
(...)
Mathematical model of traffic network of roads in the city of Pilsen,
including a car traffic matrix (so- called CUBE software:
http://www.citilabs.com/software/cube/).
(...)
As to MACQ SA, it is agreed between the Parties that, to the best of their
knowledge, the following background is hereby identified and agreed upon for
the Project: (...)
M3 Demo version in Macq's cloud for development, not allowed to put online or
in production. Excluded: background and especially data which is not owned by
Macq or which it is not allowed to share.
(...)
As to PLAN4ALL ZS, it is agreed between the Parties that, to the best of their
knowledge, the following background is hereby identified and agreed upon for
the Project: (...) Smart Points of Interest (http://sdi4apps.eu/spoi/).
Open Transport Map (http://opentransportmap.info/).
Open Land Use Map (http://sdi4apps.eu/open_land_use/).
(...)
As to STAD Gent, it is agreed between the Parties that, to the best of their
knowledge, the following background is hereby identified and agreed upon for
the Project: (...)
Any software developed for the publication, analysis, harmonisation and/or
storage of data by the City, its ICT partner Digipolis, or any subcontractor
thereof.
(...)
## The PoliVisu licensing policy
There is at the moment no single licensing policy within the PoliVisu consortium, either for the software (the so-called Playbox) or its individual components, some of which belong to the Background as mentioned in the previous subparagraph. This will likely be a topic of discussion at later project stages. Likewise, there has been no explicit consideration of the data licensing issue at the broad consortium level yet, which may be due to the relatively early stage of the project’s lifespan and the limited number of plenary meetings held so far.
However, a few building blocks can already be identified, based on the discussion conducted in this document, the GA provisions quoted above as well as others not quoted yet, and the individual partners’ declarations in the CA. These provisions have been implicitly accepted by the PoliVisu consortium members upon their signature of the aforementioned documents and are therefore fully enforceable. They are summarized in the table below.
**Table 2. Building blocks of the PoliVisu licensing policy**
<table>
<tr>
<th>
**Typology of data**
</th>
<th>
**Licensees**
</th>
<th>
**During the project period**
</th>
<th>
**After the project period**
</th>
<th>
**Legal references**
</th> </tr>
<tr>
<td>
Pre-existing (e.g. part of the Background knowledge of PoliVisu, as listed in
the CA Attachment 1)
</td>
<td>
Other members of the
PoliVisu consortium
</td>
<td>
Royalty free usage
No right to sublicense
</td>
<td>
Under fair and reasonable conditions
</td>
<td>
GA Art. 25.2
GA Art. 25.3
</td> </tr>
<tr>
<td>
Any interested third party
</td>
<td>
As per the Background commercial licence
</td>
<td>
As per the Background commercial licence
</td>
<td>
CA Attachment 1
</td> </tr>
<tr>
<td>
Sourced from third parties for the execution of project activities (e.g.
portions of large datasets)
</td>
<td>
Other members of the
PoliVisu consortium
</td>
<td>
Royalty free usage
No right to sublicense
</td>
<td>
Within the scope of the third party’s license
</td>
<td>
General rules on IPR and license details
</td> </tr>
<tr>
<td>
Any interested third party
</td>
<td>
No right to sublicense
</td>
<td>
No right to sublicense
</td>
<td>
General rules on IPR and license details
</td> </tr>
<tr>
<td>
Freely available in the state of art (e.g. Open
Data)
</td>
<td>
Other members of the
PoliVisu consortium
</td>
<td>
Royalty free usage
</td>
<td>
Royalty free usage
</td>
<td>
Within the scope of the data owner’s license
</td> </tr>
<tr>
<td>
Any interested third party
</td>
<td>
Royalty free usage
</td>
<td>
Royalty free usage
</td>
<td>
Within the scope of the data owner’s license
</td> </tr>
<tr>
<td>
Newly produced 13 during the project (i.e. part of the Foreground knowledge
of PoliVisu)
</td>
<td>
Other members of the
PoliVisu consortium
</td>
<td>
Royalty free usage
No right to sublicense
</td>
<td>
Under fair and reasonable conditions
</td>
<td>
GA Art. 26.2
</td> </tr>
<tr>
<td>
Any interested third party
</td>
<td>
Open access at flexible conditions
</td>
<td>
Open access at flexible conditions
</td>
<td>
GA Art. 29.3
</td> </tr> </table>
## Special provisions for big datasets
The PoliVisu DoA describes how big data from different sources – notably
available at city level, in relation to the nature of the identified project
pilots, dealing with mobility and traffic flows – can distinctively contribute
to the three processes of policy experimentation belonging to its Framework:
design, implementation and (real time) evaluation of policy solutions 14 .
Big data, as defined in ISO/IEC CD 20546, is data stored in "extensive datasets − primarily in the characteristics of volume, variety, velocity, and/or variability − that require a scalable architecture for efficient storage, manipulation, and analysis". This may include ‘smart data’, i.e. data coming from sensors, social media, and other human-related sources. This obviously raises
questions about data security and privacy, which are explicitly and
extensively dealt with in a dedicated WP (1) and will ultimately become part
of a policy oriented manual, issued in two consecutive editions as
Deliverables D7.4 (due at month 24) and D7.6 (due at month 32). In another WP
(4), the PoliVisu DoA extensively deals with the smart data infrastructure for
cities that is now going to be developed within the project. This is based on
the Warp 10 big data architecture and will set up various data processing and
analytical steps. The general principle and modus operandi is that any (big)
data can be used in any application, can be analysed and correlated with other
sources of data, and can be used to detect patterns that help understand the effective functioning of infrastructures, transport systems, services or processes within a city. The processed and analysed big data will be published
as map services. Free and open source geospatial tools and services will be
used to generate OGC standards (especially WMS-T and WFS), TMS and vector tile
based open formats for integration in GIS applications.
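To illustrate what consuming such a map service could look like in practice, the sketch below uses the OWSLib Python library to query a WMS-T endpoint. The URL, layer name, bounding box and timestamp are invented placeholders, not references to an actual PoliVisu service.

```python
from owslib.wms import WebMapService

# Hypothetical WMS-T endpoint; no actual PoliVisu service is implied.
WMS_URL = "https://example.org/geoserver/wms"

wms = WebMapService(WMS_URL, version="1.3.0")
print(list(wms.contents))  # discover the layers the service advertises

# Fetch one PNG rendering of a hypothetical traffic-intensity layer; the
# TIME parameter is what distinguishes WMS-T from plain WMS requests.
img = wms.getmap(
    layers=["traffic_intensity"],
    srs="EPSG:4326",
    bbox=(13.2, 49.6, 13.5, 49.8),  # roughly the Pilsen area
    size=(512, 512),
    format="image/png",
    time="2018-06-01T08:00:00Z",
)
with open("traffic_0800.png", "wb") as f:
    f.write(img.read())
```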
The existing OTN traffic modelling tool will be automated and ported to a big
data processing cloud to yield near-real-time traffic calculations. The
process will be calibrated to make the traffic model algorithms more accurate
(in space and time) using real time and historical traffic sensor data. System
interfaces and GUI will be developed to interact with the traffic modelling
software.
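The calibration step mentioned above can be pictured as fitting a model parameter against observed sensor counts. The following is a deliberately minimal sketch of that principle, assuming a toy one-parameter model and a grid search; the real OTN tooling is of course far more sophisticated, and all figures are invented.

```python
def model_counts(base_counts, alpha):
    """Toy traffic model: scale baseline link loads by a demand factor alpha."""
    return [alpha * c for c in base_counts]

def rmse(observed, predicted):
    """Root mean square error between sensor readings and model output."""
    return (sum((o - p) ** 2 for o, p in zip(observed, predicted))
            / len(observed)) ** 0.5

def calibrate(base_counts, sensor_counts, candidates):
    """Pick the demand factor that best reproduces the sensor readings."""
    return min(candidates,
               key=lambda a: rmse(sensor_counts, model_counts(base_counts, a)))

base = [1200, 800, 450]        # modelled link loads (vehicles/hour), invented
observed = [1350, 910, 500]    # historical sensor data for the same links
alpha = calibrate(base, observed, [x / 100 for x in range(80, 131)])
print(f"calibrated demand factor: {alpha:.2f}")
```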
Existing crowdsourcing tools (such as Waze and plzni.to) will be adopted and
complemented with standard interfaces, protocols and data models to turn user
generated data into actionable evidence for policy making. New modules will be
designed for the SensLog open source library to support its integration with
big data technologies.
Data analytics functions and algorithms will be implemented to support policy
making processes. Social Media analytics will be based on TruthNest or
TrulyMedia as an alternative option. This tool will be extended with a
monitoring mechanism for Twitter contents that gathers any information on
mobility trends automatically and in real-time and sends alerts to users on
possible events.
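Whatever client ultimately feeds the monitoring mechanism (TruthNest, TrulyMedia or a raw Twitter stream), the alerting logic reduces to matching incoming posts against mobility-related terms. The sketch below is library-agnostic and uses invented sample data; the keyword list is an assumption.

```python
MOBILITY_KEYWORDS = {"traffic", "jam", "roadworks", "accident", "detour"}

def mobility_alerts(posts):
    """Yield posts mentioning mobility-related terms, for real-time alerting."""
    for post in posts:
        words = set(post["text"].lower().split())
        hits = words & MOBILITY_KEYWORDS
        if hits:
            yield {"post": post, "matched": sorted(hits)}

stream = [
    {"user": "resident42", "text": "Huge traffic jam on the ring road again"},
    {"user": "citybot", "text": "Sunny day in the park"},
]
for alert in mobility_alerts(stream):
    print(alert["matched"], "->", alert["post"]["text"])
```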
Open source geospatial software (such as WebGLayer) will be used to realise
the big data visualisation. The tool will be extended with support for line
and area features. Advanced visualisation components will be added in the form
of multiple linked views, filters through interactive graphs, parallel
coordinates relationship analysis, map-screen extent filters, and area
selection. Focus will be set on the visualisation and filtering of mobility
related information and the comparison between different scenarios, time
periods and locations, in particular on mobile and touch devices.
The appropriate metadata will be defined for supporting the different tools
and processes in real life decision making conditions. This includes the
structures, services, semantics and standards to support big data, sensor
data, advanced analytics and linked data. Two open source metadata tools will
be considered in the project: GeoNetwork and Micka. The consortium will
contribute to the definition of integrated metadata standards in the OGC
metadata workgroup.
Considering the above scenario, as well as the DoA statement that “PoliVisu
will treat the data as confidential and will take every precaution to
guarantee the privacy to participants, i.e., ensuring that personal data will
be appropriately anonymised and be made inaccessible to third parties” (Part
B, p. 102), the resulting, natural implication is that a number of anonymization, aggregation, and blurring techniques must be tested well in advance, and applied to sourced and produced datasets depending on the requirements of the various project pilots. The results of this effort will be
released as two WP4 Deliverables, notably a White Paper on data anonymisation
issued in two consecutive editions, D4.5 at month 24 and D4.6 at month 30.
However, due to the key role played by anonymization in the context of the
PoliVisu project and the need to balance privacy and security with the policy
(end user) requirements of having usable datasets for e.g. traffic flows
measurement, detection of trends, or sentiment analysis, the first edition of the DMP strongly recommended that the contents of this Section be updated and extended in the present edition, namely at month 12 of the work plan. This is the purpose of the following subsection.
## Special provisions for data anonymization
The main conclusion of D6.2, and of the internal partner survey conducted in preparation for it, is that the majority of relevant datasets for the three pilot cities are already available, and that those which are not have the potential to become so. However, further analyses and data preparations are needed before the partners can combine the available (and, for the moment, unavailable/unusable) datasets, because of the wide variety of data standards and the varying quality of the datasets. Among these additional steps towards achieving ‘data readiness’, anonymization is a key process component.
According to Deliverable 6.2 conclusions, this aspect is particularly relevant
in the case of Gent, and more specifically for the student housing datasets.
In any case, the next statements are general in principle and need to be
adopted and translated into practice by all the members of the PoliVisu
consortium.
### Overview of available techniques
By anonymization, the legal and technical practice of data management intends
a collection of methods and tools to transform a collection of personal data
available to an organisation, in such a way to impede the establishment of a
transparent, and potentially harmful, connection between the information
contained in the dataset and the personal identity of the individual data
subject(s). Indeed, many techniques are known from the state of the art,
including (non exhaustively) the following:
○ **Attribute suppression** : consisting in the removal of an entire field of a dataset (also referred to as a “column” in spreadsheets);
○ **Record suppression** : consisting in the removal of an entire entry of a dataset (also referred to as a “row” in spreadsheets);
○ **Swapping** : consisting in the rearrangement of items in the dataset such
that the individual attribute values are still represented therein, but
generally, do not correspond to the original records. This technique is also
referred to as shuffling and permutation;
○ **Data perturbation** : consisting in a modification of the values from the
original dataset to be slightly different from the original ones;
○ **Character masking** : achieved by using a constant symbol (e.g. “*” or
“x”) in the place of a data point - typically, a name/surname string or other
recognizable elements of a record;
○ **Pseudonymisation** : achieved by using a pseudonym in the place of a
name/surname string, with or without the possibility of coming back to the
original identity of the data subject(s) - depending on whether the data owner
decides to keep track of the matching and store that information in a secure
place. This technique is also referred to as coding;
○ **Generalisation** : achieved by deliberately reducing the precision of
available data through e.g. converting a person’s age into an age range, or a
precise location into a less precise location. This technique is also referred
to as recoding;
○ **Aggregation** : achieved by converting a dataset from a list of records
to summarised values;
○ **Creation of a “synthetic” dataset** : achieved by using the original data
to generate a wholly different collection, instead of modifying the original
dataset.
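To make three of these techniques concrete (pseudonymisation, generalisation and aggregation), the following sketch applies them to a toy record set with pandas. All names, columns and values are invented for illustration; the salted-hash pseudonymisation shown here is only one of several possible implementations.

```python
import hashlib

import pandas as pd

df = pd.DataFrame({
    "name": ["Alice Janssens", "Bart Peeters", "Carla Nova"],
    "age": [23, 67, 41],
    "district": ["Gent-Centrum", "Gent-Centrum", "Pilsen-3"],
})

# Pseudonymisation: replace the identifying string with a salted hash.
SALT = "project-secret"  # in practice, stored separately and securely
df["pseudonym"] = df["name"].apply(
    lambda n: hashlib.sha256((SALT + n).encode()).hexdigest()[:10])
df = df.drop(columns=["name"])  # attribute suppression of the raw identifier

# Generalisation: reduce age precision to 10-year bands.
df["age_band"] = pd.cut(df["age"], bins=range(0, 101, 10)).astype(str)
df = df.drop(columns=["age"])

# Aggregation: publish only summarised counts per district.
summary = df.groupby("district").size().rename("residents").reset_index()
print(summary)
```

Note that, as the following paragraphs stress, such one-off transformations assume a static, clean dataset; in practice they need to be embedded in a repeatable, auditable process.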
### Limitations and risks
Generally speaking, all the above techniques achieve the goal of protecting
the personal identity of the data subjects, although their impact on the
integrity and usability of the underlying dataset is obviously different and
should be carefully considered prior to undertaking any irreversible action.
Nevertheless, an important element of reflection is the known fact that all the aforementioned approaches are **static** – in the sense that they refer to an ideal (perhaps too idealistic) situation whereby the dataset to be anonymised is fully available at the time, has a standard format and is clean and complete; thus, the process of anonymisation can be performed rather easily with the support of one or more methods and tools, and above all once and for all. Unfortunately, this is not the case in normal practice, where datasets are incomplete, “dirty”, and subject to continuous revision and upgrade, including in a few (but not infrequent) cases the addition of new records/attributes and/or the initiation of new forms of data collection such as audio/video, images, texts, geolocation, biometrics etc.
In the **dynamic** – not to say chaotic – situation that characterizes real life, even the implementation of any of the above methods and tools can prove inappropriate and lead the whole anonymisation process to unsatisfactory results. Not to mention the biggest risk of all: that the anonymisation process itself fails and leads to an unwanted disclosure of personal data. This is not necessarily because of inherent mistakes in, or the improper use of, some of the aforementioned techniques, but more simply due to careless management of security aspects (such as storing the information on pseudonyms in an insufficiently secure place) or, even more simply, to an insufficient level of protection against the use of third parties’ data sources to re-identify people whose data was supposed to be anonymised.
In such a difficult context, the best (and perhaps the only) defence is procedural: in the case of the PoliVisu pilots, the preparatory stage outlined in Deliverable 6.2 should be complemented by the roll-out of a visibly and transparently implemented anonymisation process. This is expected to integrate the use of other techniques mentioned in D6.2 such as data cropping, data cleansing and ETL, social media scraping and time series creation.
# PoliVisu Data Management Plan
In this Section, the data usage scenarios presented in the Introduction are
used as a basis for discussing the key issues to be examined in relation to
each distinct paragraph of the PoliVisu DMP. As a reminder, the three
scenarios, which jointly compose the PoliVisu’s data management lifecycle,
are:
* Original data produced by the PoliVisu consortium and/or individual members of it (e.g. during a dissemination action or a pilot activity);
* Existing data already in possession of the PoliVisu consortium and/or individual members of it prior to the project’s initiation;
* Existing data sourced/procured by the PoliVisu consortium and/or individual members of it during the project’s timeline.
On the other hand, the datasets handled within the three above scenarios can
belong to either of these three categories:
* Confidential data (for business and/or privacy protection);
* Anonymised and Public data (as explained in the Introduction, these two aspects go hand in hand);
* Non anonymised data (the residual category).
In the following text, we reproduce the contents of Section 3 of D2.10 almost
unchanged, apart from some corrections of typos or equivalent modifications
aimed at improving readability and understandability. In case the reader is
already familiar with the D2.10 contents, the remainder of this Section may be skipped entirely.
## Data summary
The following table summarizes the typologies and contents of data collected
and produced. For each distinct category, a preliminary list is provided in
Annex 1 to this document and will be updated in the next edition of the DMP,
due by month 24.
**Table 3. Summary of relevant data for the PoliVisu research agenda**
<table>
<tr>
<th>
**Nature of datasets**
**Data usage scenarios**
</th>
<th>
**Confidential**
</th>
<th>
**Anonymised and Public**
</th>
<th>
**Non anonymised**
</th> </tr>
<tr>
<td>
**Original data produced by the**
**PoliVisu consortium**
</td>
<td>
Raw
survey/interview/sensor data
Evidence from project pilots
Personal data of end users
New contacts established
</td>
<td>
Summaries of surveys/interviews
Data in reports of pilot activities
End user data on public display
Contact data within deliverables
</td>
<td>
Photos/videos shot during public events
Audio recordings (e.g.
Skype)
Data in internal repositories
</td> </tr>
<tr>
<td>
**Existing data already in possession of the PoliVisu consortium and/or
partners**
</td>
<td>
Data embedded in some of the Background solutions
(see par. 2.4.2 above) Contact databases
</td>
<td>
Data embedded in some of the Background solutions (see par.
2.4.2 above)
Website logs and similar metrics
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Existing data sourced/procured by the PoliVisu consortium and/or partners**
</td>
<td>
Raw data in possession of the Cities or of any third party involved in the
pilots
</td>
<td>
Free and open data (including from scientific and statistical publications)
</td>
<td>
N/A
</td> </tr> </table>
The main implications of the above table for the three usage scenarios are the
following, in **decreasing order of urgency** for the related action lines
as well as **increasing order of gravity** for the consequences of any
inadvertent behaviour by the members of the consortium:
* The organisation of Living Lab experimentations (as foreseen by the project’s work plan) implies that personal data handling of the end users acting as volunteers must be carefully considered, also for their ethical implications.
* For any photos/videos shot during public events, it is crucial to collect an **informed consent note** 15 from all the participants, with an explicit disclaimer in case of intended publication of those personal images in e.g. newspapers, internet sites, or social media groups. This brings the data back into the Confidential category, where it may be stored and/or processed for legitimate reasons.
* For any audio recordings stored, e.g. in the project’s official repository (currently Google Drive) or in individual partners’ repositories, care must be taken regarding the risk of involuntary disclosure and/or the consequences of misuse for any unauthorized purpose. The same goes for the personal data of each partner in the consortium.
* Informed consent forms must be signed (also electronically) by all participants in surveys, interviews and/or pilot activities. As an alternative option, the partner in charge will commit to anonymisation and other related measures as a way to protect the identity of the respondents/pilot users.
* Informed consent forms are also required when using available contacts (be they preexisting to the project or created through it) to disseminate information via e.g. newsletters or dedicated emails. In this respect, the GDPR provisions are particularly binding and must be carefully considered, at least in any doubtful case.
* As a general rule, access conferred to Background knowledge on a royalty free basis during a project execution does not involve the right to sublicense. Therefore, attention must be paid by each partner of PoliVisu to ensure the respect of licensing conditions at any time and by any member of the team.
* This also applies to any dataset sourced or procured from third parties during the PoliVisu project’s lifetime.
## Data collection
The following table summarizes the procedures for collecting project related
data. Annex 1 to this document (“Running list of data sources”) provides some
concrete examples of data usage scenarios. More of them will be added to the
third edition of the DMP, due by month 24, as allowed by the expected progress
of PoliVisu city pilots.
**Table 4. Summary of PoliVisu data collection procedures**
<table>
<tr>
<th>
**Nature of datasets**
</th>
<th>
**Confidential**
</th>
<th>
**Anonymised and Public**
</th>
<th>
**Non anonymised**
</th> </tr>
<tr>
<td>
**Data usage scenarios**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Original data produced by the**
**PoliVisu consortium**
</td>
<td>
Surveys
Interviews
Pilot activities
F2F / remote interaction
</td>
<td>
Newsletters
Publications
Personal Emails
Open Access repositories
</td>
<td>
Events coverage - directly or via specialised agencies
A/V conferencing systems
Internal repositories
</td> </tr>
<tr>
<td>
**Existing data already in possession of the PoliVisu consortium and/or
partners**
</td>
<td>
Seamless access and use during project execution
</td>
<td>
Seamless access and use during project execution
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Existing data sourced/procured by the PoliVisu consortium and/or partners**
</td>
<td>
Licensed access and use during project execution
</td>
<td>
Free and open access and use during project execution
</td>
<td>
N/A
</td> </tr> </table>
An implication of the above table, which may not have been evident in the previous one, is that **every partner is responsible for the behaviour of all team members**, which may also include subcontracted organisations (e.g. specialised press agencies) or even volunteers. Delegating a certain job does not exempt the delegating partner from responsibility in case of improper application of extant norms and rules.
All data will be collected in a digital form – therefore CSV, PDF, (Geo)JSON,
XML, Shape, spreadsheets and textual documents will be the prevalent formats.
In case of audio/video recordings and images, the most appropriate standards
will be chosen and adopted (such as .gif, .jpg, .png, .mp3, .mp4, .mov and
.flv). Ontologies will be created in the Protégé file formats (.pont and .pins); .xml/.owl can also be used. Website pages can be created in .html and/or .xml formats.
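Since CSV and (Geo)JSON are among the prevalent formats, a recurring preparatory step is converting tabular exports (e.g. from sensors) into GeoJSON for map-based tools. The sketch below uses only the Python standard library; the column names are invented assumptions.

```python
import csv
import json

def csv_to_geojson(csv_path, lon_field="lon", lat_field="lat"):
    """Convert a CSV of point records into a GeoJSON FeatureCollection."""
    features = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            lon, lat = float(row.pop(lon_field)), float(row.pop(lat_field))
            features.append({
                "type": "Feature",
                "geometry": {"type": "Point", "coordinates": [lon, lat]},
                "properties": row,  # remaining columns become attributes
            })
    return {"type": "FeatureCollection", "features": features}

# Hypothetical usage:
# with open("sensors.geojson", "w", encoding="utf-8") as out:
#     json.dump(csv_to_geojson("sensors.csv"), out)
```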
Individually, each research output will be of manageable size to be easily
transferred by email. However, it is important to note that email transfer can
become a violation of confidentiality under certain circumstances.
## Data processing
The following table summarizes the procedures for processing PoliVisu related
data that can be envisaged at this project’s stage. As one can see, most of
them make reference to the contents of paragraph 2.6 above. In this sense,
more can probably be added to the cells of the table. Annex 1 to this document
(“Running list of data sources”) provides some meaningful case descriptions.
More of them will be added to the third edition of the DMP, due by month 24,
as allowed by the expected progress of PoliVisu city pilots.
**Table 5. Summary of PoliVisu data processing procedures**
<table>
<tr>
<th>
**Nature of datasets**
**Data usage scenarios**
</th>
<th>
**Confidential**
</th>
<th>
**Anonymised and Public**
</th>
<th>
**Non anonymised**
</th> </tr>
<tr>
<td>
**Original data produced by the**
**PoliVisu consortium**
</td>
<td>
Anonymisation
Visualisation
</td>
<td>
Statistical evaluation
Visualisation
</td>
<td>
Selection/destruction
Blurring of identities
</td> </tr>
<tr>
<td>
**Existing data already in possession of the PoliVisu consortium and/or
partners**
</td>
<td>
Anonymisation
Statistical evaluation
Metadata generation
</td>
<td>
Visualisation
Analytics
Publication as map services
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Existing data sourced/procured by the PoliVisu consortium**
</td>
<td>
Anonymisation
Statistical evaluation
</td>
<td>
Visualisation
Analytics
</td>
<td>
N/A
</td> </tr> </table>
<table>
<tr>
<th>
**and/or partners**
</th>
<th>
Metadata generation
</th>
<th>
Publication as map services
</th>
<th>
</th> </tr> </table>
Apart from the specific software listed in paragraph 2.6 above, state of the
art productivity tools will be used to process/visualize the data used or
generated during the project. Typically, the partners are left free to adopt
their preferred suite (such as Microsoft Office™ for PC or Mac, Apple’s iWork™
and OpenOffice™ or equivalent). However, the following tools are the ones
mainly used by the consortium:
* Google’s shared productivity tools (so-called G-Suite™) are used for the co-creation of outputs by multiple, not co-located authors.
* Adobe Acrobat™ or equivalent software is used to visualise/create the PDF files.
* Protégé™ or equivalent software is used to generate the ontologies.
* Photoshop™ or equivalent software are used to manipulate images.
* State of the art browsers (such as Mozilla Firefox™, Google Chrome™, Apple Safari™ and Microsoft Internet Explorer™) are used to navigate and modify the Internet pages, including the management and maintenance of social media groups.
* Cisco Webex™ or Skype™ (depending on the number of participants) are the selected tools for audio/video conferencing, which may also serve to manage public webinars.
* Tools like Google Forms™, and optionally SurveyMonkey™ and LimeSurvey™, are used for the administration of online surveys with remotely located participants.
* Dedicated Vimeo™ or YouTube™ channels can help broadcast the video clips produced by the consortium to a wider international audience, in addition to the project website.
* Mailchimp™ or equivalent software is helpful to create, distribute and administer project newsletters and the underlying mailing lists.
## Data storage
The following table summarizes the procedures for storing project related
data, during and after the PoliVisu lifetime, and the most frequently used
repositories. As for the previous paragraphs, we limit ourselves now to
listing the headlines. Annex 1 to this document (“Running list of data
sources”) provides some contents for the pilot descriptions. More of them will
be added to the third edition of the DMP, due by month 24, as allowed by the
expected progress of PoliVisu city pilots.
**Table 6. Summary of PoliVisu data storage procedures**
<table>
<tr>
<th>
**Nature of datasets**
**Data usage scenarios**
</th>
<th>
**Confidential**
</th>
<th>
**Anonymised and Public**
</th>
<th>
**Non anonymised**
</th> </tr>
<tr>
<td>
**Original data produced by the**
**PoliVisu consortium**
</td>
<td>
Individual partner repositories
Common project repository
</td>
<td>
Project website
Open access repository
</td>
<td>
Individual partner repositories
Common project repository
</td> </tr>
<tr>
<td>
**Existing data already in possession of the PoliVisu consortium and/or
partners**
</td>
<td>
Specific software repositories
</td>
<td>
Playbox components
Map services
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Existing data sourced/procured by the PoliVisu consortium and/or partners**
</td>
<td>
Individual partner repositories
Third party repositories
Cloud repositories
</td>
<td>
Playbox components
Map services
Cloud repositories
</td>
<td>
N/A
</td> </tr> </table>
Google Drive™ is the tool selected as PoliVisu’s data and information repository. This includes both the project deliverables (including relevant references utilised for their production or generated from them as project publications, e.g. journal articles, conference papers, e-books, manuals, guidelines, policy briefs etc.) and any other related information, including relevant datasets. This implies that the privacy and security measures of Google Drive™ must be GDPR compliant; verifying this circumstance is the responsibility of the coordinator.
Additionally, the coordinator will make sure that the official project repository periodically generates back-up files of all data, in case anything gets lost, corrupted or becomes unusable at a later stage (including after the project’s end). The same responsibility falls to each partner for the local repositories they utilise (in some cases, these are handled by large organisations such as Universities or Municipalities; in others, by SMEs or even personal servers or laptops).
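As a minimal illustration of this back-up obligation (not a description of the coordinator’s actual tooling), a partner could timestamp and archive a local repository as sketched below; the paths are placeholders.

```python
import shutil
from datetime import datetime, timezone
from pathlib import Path

def backup_repository(repo_dir: str, backup_dir: str) -> Path:
    """Create a timestamped zip archive of a local data repository."""
    Path(backup_dir).mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive = shutil.make_archive(
        str(Path(backup_dir) / f"repo-{stamp}"), "zip", root_dir=repo_dir)
    return Path(archive)

# Hypothetical usage: backup_repository("./polivisu-data", "./backups")
```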
Collectively, we expect the whole set of outputs to reach a size of 500-600 GB over the project duration. This range will particularly depend on the number and size of the received datasets to be utilised for the execution of the PoliVisu pilots.
Whatever the license that the consortium establishes for final datasets, their
intermediate versions will be deemed as **business confidential** , and
restricted to circulating only within the consortium.
Finally, and as stipulated in the DoA, each digital object identified as an R&D result, including its associated metadata, will be stored in a dedicated open access repository managed by POLIMI, for the purpose of both preserving that evidence and making it more visible and accessible to the scientific, academic and corporate world.
The next edition of this DMP will provide additional details on such open
access repository.
In addition to the POLIMI open access server, other datasets may be stored on the following repositories:
* Cordis, through the EU Sygma portal
* The PoliVisu website (with links on/to the Social Media groups)
* Individual Partner websites and the social media groups they are part of
* The portals of the academic publishers where scientific publications will be accepted
* Other official sources such as OpenAIRE/Zenodo 16 and maybe EUDAT 17
* Consortium’s and Partners’ press agencies and blogs
* PoliVisu official newsletters.
## Data sharing
Last but not least, the following table summarizes the procedures for sharing
PoliVisu related data in a useful and legitimate manner. When sharing, it is
of utmost importance to keep in mind, not only the prescriptions and
recommendations of extant rules and norms (including this DMP), as far as
confidentiality and personal data protection are concerned, but also the risk
of voluntary or involuntary transfer of data from the inside to the outside of
the European Economic Area (EEA).
In fact, while the GDPR applies also to the management of EU citizens’ personal data (for business or research purposes) outside the EU, not all the countries
worldwide are subject to bilateral agreements with the EU as far as personal
data protection is concerned. For instance, the US based organisations are
bound by the so-called EU-U.S. Privacy Shield Framework, which concerns the
collection, use, and retention of personal information transferred from the
EEA to the US. This makes the transfer of data from the partners to any US
based organisation relatively exempt from legal risks. This may not be the
same in other countries worldwide, however, and the risk in question is less
hypothetical than one may think, if we consider the case of personal sharing of raw data with e.g. academic colleagues who are abroad to attend a conference. It is also for this reason that the sharing of non-anonymised data is discouraged altogether, as shown in the table.
**Table 7. Summary of PoliVisu data sharing procedures**
<table>
<tr>
<th>
**Nature of datasets**
**Data usage scenarios**
</th>
<th>
**Confidential**
</th>
<th>
**Anonymised and Public**
</th>
<th>
**Non anonymised**
</th> </tr>
<tr>
<td>
**Original data produced by the**
**PoliVisu consortium**
</td>
<td>
Personal email communication
Shared repositories
</td>
<td>
Project website
Open access repository
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Existing data already in possession of the PoliVisu consortium and/or
partners**
</td>
<td>
Personal email communication Shared access to software
repositories
</td>
<td>
Shared access to Playbox components Map services
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Existing data sourced/procured by the PoliVisu consortium and/or partners**
</td>
<td>
Personal email communication
Shared repositories
</td>
<td>
Shared access to Playbox components Map services
</td>
<td>
N/A
</td> </tr> </table>
As for the previous tables, Annex 1 to this document (“Running list of data
sources”) provides some initial case descriptions. More of them will be added
to the third edition of the DMP, due by month 24, as allowed by the expected
progress of PoliVisu city pilots.
# Conclusions and Future Work
This document is the second in a series of four planned deliverables concerning the PoliVisu Data Management Plan (DMP), in fulfilment of the requirements of WP2 of the project’s work plan. The main reason for planning four versions of the DMP (at months 6, 12, 24 and 36), and particularly two of them during the first project year, is the need to wait until the development and piloting activities of PoliVisu gain further momentum, in order to:
* Secure the current, proposed structure of contents against any changes suggested by the gradual and incremental start up of the core project activities, and
* Enrich the already existing contents with important add-ons based on the learning process that the PoliVisu partners will activate throughout the project’s lifetime, considering also that most of the project work will be oriented to operationalizing the connection between data handling (including analytics and visualization) and the policy making cycle outlined in deliverable D3.2 (also under POLIMI responsibility, like the present one).
In the first edition of the DMP (deliverable D2.10) we basically envisaged
three main data usage scenarios, which jointly compose PoliVisu’s data
management lifecycle:
* Original data produced by the PoliVisu consortium and/or individual members of it (e.g. during a dissemination action or a pilot activity)
* Existing data already in possession of the PoliVisu consortium and/or individual members of it prior to the project’s initiation
* Existing data sourced/procured by the PoliVisu consortium and/or individual members of it during the project’s timeline.
That edition of the DMP has, in our opinion, fulfilled the immediate goals of
such a stepwise approach to data management, by:
* Presenting the legislative and regulatory framework, shaping the external context of this DMP in a relatively immutable manner, at least within the timeframe of the PoliVisu project;
* Identifying the fundamental principles of FAIR data handling according to the EC requirements and that the PoliVisu consortium and individual partners are bound to respect;
* Proposing a unitary description of the PoliVisu data management lifecycle, a precise requirement of the DoA, which has been the leitmotif and conceptual architrave of the whole document;
* Summarizing the key aspects of data collection, processing, storage and sharing (the typical contents of a DMP) within the proposed lifecycle elements and particularly highlighting - first and foremost, to the attention of the partners - some key aspects of data management that go beyond the operational link with open access policy (the likely reason why this deliverable has been assigned to POLIMI) and interfere with privacy and security policies (an ethical topic falling under the competence of WP1) as well as with the way background knowledge and tools will be developed, deployed and customised to serve the needs of the city pilots (a topic entirely covered by the WP4 team).
All the above contents originating from D2.10 have been retained in this
document, and only aligned with and supplemented by some extra items within
this second edition of the DMP, notably reflecting progress in the
consortium’s and partners’ reflections on some aspects left unattended and
explicitly mentioned in the conclusions of Deliverable 2.10, namely:
* Integration of the baseline analysis done in D6.2 until month 7;
* Compilation of Annex 1 “Running list of data sources”;
* Provision of case descriptions for Section 3;
* A more careful consideration of data anonymisation related aspects in Section 2.
In so doing, some of the contents left unattended or only partly covered by
the previous edition of this DMP have now been addressed. These reflect:
1. A stronger integration of partners’ contributions in the texts composing this document, although with the twin filter of the responsible author (POLIMI), who provided most of the contents by now, with other partners acting as external reviewers, and the table provided as Appendix to Deliverable 6.2, which has been “scraped” by the responsible author to create Annex I to this DMP.
2. A deeper connection with the topic of data handling in deliverables other than this one, particularly: D1.1 (Ethical Requirement No. 4), D1.3 (Ethical Requirement No. 3), D2.2 (Project Management Plan), D2.3 (Quality and Risk Plan), D6.1 (Pilot Scenarios), D7.1 (Evaluation Plan) and D8.1 (Impact Enhancement Road Map). This is in order not to miss any precious information while at the same time avoiding duplications and inconsistencies in their respective structures and contents.
Still, this document is lacking in a variety of respects, which will be
gradually covered in its forthcoming editions. They include the following:
3. While commenting on the TOC of D2.10, some partners proposed a more detailed consideration of the following topics: open standards, open data licensing, and consortium level policies. The latter aspect has been partly dealt with by reconstructing “ex post” some provisions of the GA and CA that are already binding for all partners. However, it is certainly worthwhile to make a more explicit and (to some extent) forward-looking plan of, for example, what kinds of licenses should apply to the output categories making up the project results. It is also in that context that the issues of open standards and open data licenses (other than those belonging to the open access scheme) may be dealt with more extensively.
4. Another missing indication is surely that of the partners responsible for the various steps of data management. At the moment, the crucial question of “who is in charge of” collecting, processing and storing data for each partner, or of deciding to limit or allow full access to some datasets, is a subject of future decision-making and will also depend on the maturity level of the pilot partners involved and on strategic decisions taken when designing the PoliVisu platform. This question is not trivial (an answer equating the members of each partner team, or the heads of the teams, with the “people in charge” is by no means acceptable, as it takes too many things for granted, including the absence of hierarchies and other sorts of complexity within each partner’s organisation). In fact, some early work within the consortium has been dedicated to creating a working group of the Data Protection Officers of each participant organisation. However, there is more in between, and it will be the task of the next DMP edition to dig into the issue, thus contributing to the specialisation and clarification of the use cases now presented only superficially, in table form, within the preceding Section 3.
5. A final, indispensable aspect to be covered by a DMP is obviously the post-project scenario. What is the consortium’s and individual partners’ foresight on the management of pilot-related datasets and, more generally, of all the datasets created during the project’s lifetime that - for legitimate reasons, first and foremost exploitation-related - are not subject to immediate publicity and may nonetheless require considerable attention and care to be maintained and preserved? Arguably the PoliVisu work plan is at too early a stage to enable a firm definition of these aspects. However, with the progress of activities (and time), we expect that the operational links created at pilot level between (big) data handling, the behaviours of people involved in the Living Lab experimentations, and the three stages of the PoliVisu policy cycle will start generating insights and enable the collection of evidence in view of the broader dissemination and exploitation phases of the project.
For now, it is still possible to conclude - as we did in the previous
edition of this Deliverable - that it would be a great result if the PoliVisu
DMP could enable all partners to understand the different action items that
handling data of different nature, origin and “size” implies for anyone
wanting to stay in a “safe harbour” while actively contributing to the
successful achievement of the pilot and project outcomes.
# Executive summary
This document establishes a Data Management Plan for the INTEND project. It
has been prepared by the Coventry University Enterprises Project Coordination
Team, with review by the INTEND Work Package Leaders, and describes how data
will be collected, processed and generated during and after the project
lifetime.
This document also presents basic descriptions of the logical structure of
users and groups, together with their roles, and elaborates whether and how
the data will be protected (for intellectual property, personal or security
reasons) or made open.
# Introduction
The INTEND project participates in the Open Research Data Pilot (ORD pilot).
The ORD pilot aims to improve and maximise access to and re-use of research
data generated by Horizon 2020 projects. A Data Management Plan is required
for all projects participating in the extended ORD pilot.
A Data Management Plan is a key element of good data management. It describes
the management life cycle of the data collected and produced within the
project. As part of making research data findable, accessible, interoperable
and re-usable (FAIR), the Data Management Plan will need to include
information on:
* the handling of research data during and after the end of the project
* the type of data collected, processed and generated
* the methodology applied
* whether the data will be shared or made open access
* the way data will be curated and preserved, also after the end of the project.
Therefore, the INTEND Data Management Plan will sensitize project partners to
data management, establish some common data management rules, and make it
easier to find project data when needed.
# Purpose of data collection
The overall objective of the INTEND project is to deliver an elaborated study
of the research needs and priorities in the transport sector utilising a
systematic data collection method. Therefore, the project collects and
generates data for internal use and further processing by the project partners
to produce analysis, reports and plans, as well as data that will be made
accessible for external users such as the project deliverables, project
summaries stored in the transport research database and communication and
dissemination material.
The following sections elaborate on the purpose of data collection, the origin
and types of data, the format and storage methods and concentrate on Work
Packages 2 to 5; this is because Work Packages 1 and 6 comprise only
confidential data. Data handling under WP6 is described in Deliverables D6.1
and D6.2.
## Data collection in WP2
The data collected in Work Package 2 supports an annotated literature review
to identify future transport technologies and mobility concepts with a time
horizon of 2020-2035 and the main political imperatives and visions regarding
transport. This will result in the production of three deliverables:
* D2.1 Transport projects & future technologies synopses handbook
* D2.2 Report on key transport concepts of the future
* D2.3 Report on political imperatives
All this data is qualitative and will come from the reviewed literature at
European and international level, i.e. sponsored research projects, scientific
publications, forward looking exercises, industry studies and strategic
research agendas, with emphasis on transport. In particular, the data in
**D2.1 handbook** will consist of project descriptions, brief summaries that
will be categorised under one of the four transport modes (road, aviation,
rail and maritime) and some descriptive statistics (such as percentages of
projects reviewed per mode, percentages of projects for freight and passenger
sector, percentages of thematic areas per mode, most cited technology themes).
In order to organise and summarise the amount of literature that will be
reviewed in D2.1, an Excel spreadsheet has been created, which will be
referred to as the “projects & report synopsis template”. This template
covers the four transport modes, with three sheets per mode (reports,
projects and project acronyms). The reports sheet is used purely for
inputting the technologies that have been identified from reports, roadmaps,
papers and technology websites; this sheet is not related to funded research
projects. The reports can come from a transport stakeholder, a consultancy
firm, a university or a company, and the same applies to roadmaps. The
projects sheet is used purely for inputting the technologies that have been
researched by funded research projects; these technologies are identified
through the projects’ final summary reports.
The thematic areas are Competitiveness, Environment, Energy, Infrastructure
and Systems. Each thematic area is also divided into cluster areas that
describe specific aspects, e.g. competitive production of road vehicles,
reduction of emissions, optimisation of resistance and propulsion. Each
transport area has its own clusters, specific to its characteristics, in
order to categorise the reviewed literature more precisely.
Only qualitative data is collected into the template, consisting of the
technologies that have been identified from each project/report and a brief
description of the results. The project acronyms sheet’s purpose is only to
assist with the descriptive statistics by collecting the thematic areas that
each project belongs to, its funding scheme and whether the project is
relevant to the passenger or freight sector.
All the results coming from D2.1 report will be publicly accessible via the
transport research database on the project website (for more details, please
see section 3).
The data in **D2.2 report** will consist of purely qualitative data in the
form of literature review and analysis of future mobility and key transport
concepts stemming from forward looking projects, reports and studies. An
evaluation will be carried out to present what mobility concepts are more
relevant to each transport mode.
The data in **D2.3 report** will be collected, processed and analysed using
the software tool Atlas.ti, which is designed for qualitative data analysis.
It will consist of information extracted from documents only. TUB will scan
reports on political imperatives related to the transport sector from Europe
and worldwide. With the help of this software tool, whole texts will be
collected and indexed to create a topic-related bibliography. Coding concrete
statements and passages inside the texts will later allow analysis of which
statements were mentioned together with others, how often, and whether
distinctive features in terms of spatial origin or type of originator can be
observed. This will allow the creation of an overview listing the most
important political imperatives.
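To make the co-occurrence step concrete, a minimal, hypothetical sketch is given below; the codes and the exported structure are illustrative assumptions only, since the actual coding will be done in Atlas.ti:

```python
from collections import Counter
from itertools import combinations

# Hypothetical illustration only: each entry is the set of codes
# assigned to one document, e.g. exported from Atlas.ti.
coded_documents = [
    {"decarbonisation", "road transport", "EU policy"},
    {"decarbonisation", "safety", "EU policy"},
    {"safety", "rail transport"},
]

# Count how often each pair of codes appears in the same document.
co_occurrence = Counter()
for codes in coded_documents:
    co_occurrence.update(combinations(sorted(codes), 2))

for (code_a, code_b), count in co_occurrence.most_common():
    print(f"{code_a} / {code_b}: {count}")
```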
## Data collection in WP3
The data collected in Work Package 3 provides the definition and evaluation of
the most important trends (Megatrends) and technological advances in passenger
and freight transportation, which impact the realization of the transport
concepts of the future and the political imperatives identified in Work
Package 2. This will result in the production of two deliverables:
* D3.1 Report on the main Megatrends
* D3.2 Report on Megatrends validation and impact assessment
The data will come from:
* a thorough literature review looking at relevant European, global and national projects, academic literature, reports from business sector, consultancy firms and worldwide research organizations elaborating megatrends in forward looking transport projects, etc.
* ad hoc queries conducted through an online questionnaire sent to 90 people via the LimeSurvey software
* an optional webinar offered to the people who want to participate in the online questionnaire
The data in **D3.1 report** will include views of experts in terms of key
mega-trends or factors that are expected to influence both passenger and
freight transport systems. These views and analysis have already been
published and are publicly available. Therefore, numerous sources will be
elaborated in an effort to study the factors of evolution of future transport
concepts. Within each source we will identify megatrends elaborated therein.
The sources are relevant EC projects, worldwide forward looking transport
projects, national projects, academic literature and reports from business
sector and consultancy firms.
The data in **D3.2 report** will consist of the outputs from WP2
(technological advances identified in Task 2.1, political imperatives
elaborated in Task 2.2 and key transport concepts of the future described in
Task 2.3) and Task 3.1 (the most relevant megatrends identified), together
with the data collected through two questionnaires. All the results from WP2
and Task 3.1 will be grouped into clusters with the aim of evaluating the
potential impact of all elements in the clusters on the priorities of the key
transport concepts of the future. The first questionnaire will be used for
defining relationships between elements in the clusters; for this purpose, we
will use the LimeSurvey software. The second questionnaire will enable us to
obtain estimations of the relationships between all the elements in the
clusters and between the clusters. This questionnaire will be integrated
within the ANP engine intelligence, an application based on PHP, MySQL
and Python scripts and developed by FTTE.
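The ANP engine itself is not documented here, but its core step — reducing a pairwise-comparison matrix of relationship estimations to a priority vector — can be sketched as follows; the matrix values are illustrative assumptions, not project data:

```python
import numpy as np

# Minimal sketch, assuming a reciprocal pairwise-comparison matrix A in
# which A[i, j] estimates how much more important element i is than
# element j (the values below are illustrative, not project data).
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])

# The priority vector is the principal eigenvector of A; for a positive
# matrix, normalised power iteration converges to it.
w = np.ones(A.shape[0])
for _ in range(100):
    w = A @ w
    w /= w.sum()

print("priority vector:", np.round(w, 3))
```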
## Data collection in WP4
The data collected in Work Package 4 provides guidelines for a forward-looking
transport sector based on an understanding of the nature of the systemic
change in the sector and the research needs arising from it. This will result
in the production of three deliverables:
* D4.1 Sketch of future transport system
* D4.2 Gap Analysis
* D4.3 Transport research agenda: blueprint on transport research needs, priorities and opportunities
The data will come from:
* desk research that will put together the results coming from WP2 and WP3, along with additional research about technology, infrastructure and policy using studies about the future of mobility, international research agendas, international transport policy strategies, projects and literature
* qualitative interviews with 15-20 experts, analysed using the license-based software MAXQDA
* an online survey of 100-150 people using the online survey tool Unipark
The data processed in **D4.1 report** will consist of results, coming from WP2
(major transport concepts and developments of the future, identified in
existing literature) and WP3 (major influencing megatrends and their impacts
on the transport system, identified in existing literature). Additional data
used in D4.1 will consist of research-specific text passages, generated
through a desk research about technology, infrastructure and policy using
studies about the future of mobility, international research agendas,
international transport policy strategies, projects and literature that are
open to the public.
Further data in D4.1 will include qualitative data from expert interviews,
used to formulate hypotheses concerning the evolutionary development of the
transport system and to identify potential game changers for the future. This
data will be analyzed, evaluated and processed using the license-based
evaluation software MAXQDA. The hypotheses compiled on the basis of the
qualitative data resulting from the expert interviews will then serve as the
basis for an online survey (at a broader level) on the impacts of potential
game changers for the transport system, consisting of open as well as closed
questions. Thereby, further qualitative data will be generated, which will be
analyzed, evaluated and processed using the online survey tool Unipark as
well as common office applications for more specific evaluations and
interpretations.
The data in **D4.2 report** will include priorities about what technologies to
develop and megatrends to research by taking into account political
imperatives and characteristics of future transport system. Data in Gap
analysis will be based on the combination of defined different technological
advances and megatrends impacts on specific characteristics of the future
transport concepts.
The data in **D4.3 Transport research agenda** will consist of a compilation
of the key findings from previous work packages and the results from D4.1 and
D4.2. This will result in 1) an identification of Blind spots, which prevent
innovation beyond mainstream research and 2) future priority research needs,
including promising fields and possible synergies that contribute to general
social and economic development.
## Data collection in WP5
The data collected in the other Work Packages is the basis for Work Package
5, which delivers the formal structure and processes to enable effective
communication and dissemination of the knowledge gathered during the project,
as well as of the outputs produced during its lifetime. This will result in
the production of three deliverables:
production of three deliverables:
* D5.1 Dissemination and exploitation strategy plan
* D5.2 Data Management Plan
* D5.3 Web tools
Due to the nature of the activities in the Work Package, it will produce a
wide range of data and documents:
* printed communication material like flyers and brochures
* online communication material like the project website (including news in short and long version), Facebook, LinkedIn and Twitter contributions, 3 newsletters
* publications and papers related to the work done and the results achieved in the project
* communications with TRIMIS, green car congress, UITP, ELTIS and with non-academic technology news sites to further disseminate the results of the project
* database of people registered on the website to receive the INTEND newsletters
* project documents repository containing project material for internal use only
* transport projects database, accessible via the website and containing mainly the transport projects reviewed in D2.1 report
* public deliverables and other materials published on the project website
* data included in the Transport Synopsis Tool
* presentations in PowerPoint or pdf format related to project consortium meetings and events attended.
In addition to this, we also have all dissemination activities that are
documented in the Dissemination and exploitation strategy plan.
## Data formats and storage methods
The general principle followed in the INTEND project is that all the
confidential documents and data will be saved in the projects documents
repository accessible via the project website just by the partners through a
password, while the public documents and data will be published on the project
website.
The confidential documents and data are:
* Confidential deliverables (essentially the deliverables related to WP1 and WP6)
* Deliverable drafts and working documents
* Tasks guidelines
* Project meetings presentations, agendas and minutes
* Other confidential documents like Consortium Agreement, Grant Agreement, Project budget
The public documents and data are:
* Raw data from the research activities in WP3 and WP4 (online questionnaire, webinar, interviews, survey)
* All public deliverables (essentially all deliverables for WP2, WP3, WP4 and WP5)
* The transport projects database (with all the projects included in D2.1 report)
* Communication material like flyer, brochure and newsletters
* Openly accessible papers
* Dissemination events presentations
All the documents and datasets saved in the internal repository and on the
website, including all deliverables, will also be stored by CUE.
The documents generated in the project will generally be in MS Word, pdf or
PowerPoint format. The data sets will be in MS Excel and/or other common
formats for their further use (please see the detailed summary table in
section 2.2).
The generated data sets will be stored as MS Excel files in the internal
repository. The analysis of the metadata (normally summary graphs) will be
available within the relevant deliverables as MS Word and pdf files and
published on the project website. All data sets and documents will be backed
up and stored for at least five years by CUE.
Other raw data that will be generated is related to the megatrends Analytic
Network Process matrices and the analysis with the relevant software as
described in WP3. These will all be stored in file formats according to the
software and type of data in the internal repository.
Further raw data will be generated in WP4 through conducting qualitative
expert interviews (using the license-based evaluation software MAXQDA) and an
online survey (using the online survey tool Unipark). All raw data resulting
from this will be stored in common file formats such as XLS, CSV or SPSS for
their further use in MS office applications and will be stored in the internal
repository.
In general, all raw data will be stored in the internal repository and, where
relevant, on Zenodo. After being analyzed, it will be included in the
relevant deliverables.
Data files of the transport projects database will be stored in adequate data
formats on the web server. Users of the transport research database will have
read-only permission. All the data stored on the web server will exist for
five years: this will be established in a contract with the company hosting
the website. After five years the data will be deleted. However, the database
will also be saved on TRIMIS and Zenodo, so it will remain available beyond
the lifetime of the project website.
As regards the data sets collected for the Transport Synopsis Tool, these
will have to be named and stored separately as raw data. The Transport
Synopsis Tool, which will be hosted on a separate website, shows a graphical
representation of the data developed in WP3. This graphical representation is
accessible to everyone. TUB will have password-protected access to this data
stored on the corresponding webserver.
# FAIR data
The data collected and generated in the project should be made findable,
accessible, interoperable and re-usable (FAIR).
## Making data findable, including provisions for metadata
INTEND data will be **findable** with metadata. This will include the key
elements for citing and/or searching any project data and will make the data
and documents more visible to any interested party.
Moreover, the majority of the material produced, including all public
deliverables, will be findable on the project website, on TRIMIS and on
Zenodo.
It will be named according to the following convention, which includes clear
version numbers: Project name + item name + version number. For example, for
the deliverables it will be: INTEND_Dx.x_shorttitle_vx.x.docx (or
.pptx/.xlsx/.pdf…).
Search keywords will be provided to optimize possibilities for re-use. These
will be: megatrends, futurology, transport research, transport research
agenda, transport projects, transport of the future.
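As a minimal sketch, conformance of deliverable file names to this convention can be checked automatically; the pattern below is an assumption derived from the single example above, not an agreed project tool:

```python
import re

# Minimal sketch; the pattern is an assumption derived from the single
# example above and covers deliverable files only.
PATTERN = re.compile(
    r"^INTEND_D\d+\.\d+_[A-Za-z0-9-]+_v\d+\.\d+\.(docx|pptx|xlsx|pdf)$"
)

def is_valid_deliverable_name(filename):
    """Return True if a file name follows the INTEND convention."""
    return PATTERN.match(filename) is not None

print(is_valid_deliverable_name("INTEND_D5.2_DMP_v1.0.pdf"))  # True
print(is_valid_deliverable_name("final_dmp_v2.pdf"))          # False
```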
## Making data openly accessible
The INTEND data will be **accessible** mainly through the public deliverables
that will be published on the project website, on TRIMIS and on Zenodo. The
confidential deliverables and other confidential data will be available in the
documents repository accessible through the website just by the project
partners. Please see below a list of the INTEND deliverables and relevant
important data and its level of open accessibility:
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Dissemination**
**level**
</th>
<th>
**Format**
</th>
<th>
**Repository**
</th> </tr>
<tr>
<td>
D1.1 Project Manual
</td>
<td>
Confidential
</td>
<td>
pdf
</td>
<td>
Internal repository
</td> </tr>
<tr>
<td>
Project meetings presentations
</td>
<td>
Confidential
</td>
<td>
MS PowerPoint or pdf
</td>
<td>
Internal repository
</td> </tr>
<tr>
<td>
Project meeting agendas
</td>
<td>
Confidential
</td>
<td>
pdf
</td>
<td>
Internal repository
</td> </tr>
<tr>
<td>
Project meeting minutes
</td>
<td>
Confidential
</td>
<td>
pdf
</td>
<td>
Internal repository
</td> </tr>
<tr>
<td>
Other confidential data like
Consortium Agreement, Grant Agreement, Project budget
</td>
<td>
Confidential
</td>
<td>
pdf
</td>
<td>
Internal repository
</td> </tr>
<tr>
<td>
D1.2 Quality Assurance Plan and Risk Management Plan
</td>
<td>
Confidential
</td>
<td>
pdf
</td>
<td>
Internal repository
</td> </tr>
<tr>
<td>
D2.1 Transport projects & future technologies synopses handbook
</td>
<td>
Public
</td>
<td>
pdf
</td>
<td>
Shared on INTEND website, TRIMIS and Zenodo
</td> </tr>
<tr>
<td>
Transport projects data
</td>
<td>
Public
</td>
<td>
To be discussed
<td>
Shared on the website, TRIMIS and Zenodo
</td> </tr>
<tr>
<td>
D2.2 Report on key transport concepts of the future
</td>
<td>
Public
</td>
<td>
pdf
</td>
<td>
Shared on INTEND website, TRIMIS and Zenodo
</td> </tr>
<tr>
<td>
D2.3 Report on political imperatives
</td>
<td>
Public
</td>
<td>
pdf
</td>
<td>
Shared on INTEND website, TRIMIS and Zenodo
</td> </tr>
<tr>
<td>
D3.1 Report on main Megatrends
</td>
<td>
Public
</td>
<td>
pdf
</td>
<td>
Shared on INTEND website, TRIMIS and Zenodo
</td> </tr>
<tr>
<td>
D3.2 Report on Megatrends validation and impact assessment
</td>
<td>
Public
</td>
<td>
pdf
</td>
<td>
Shared on INTEND website, TRIMIS and Zenodo
</td> </tr>
<tr>
<td>
Raw data from online questionnaires and webinars
</td>
<td>
Public
</td>
<td>
To be decided
</td>
<td>
Internal repository and Zenodo
</td> </tr>
<tr>
<td>
D4.1 Sketch of the future transport system
</td>
<td>
Public
</td>
<td>
pdf
</td>
<td>
Shared on INTEND website, TRIMIS and Zenodo
</td> </tr>
<tr>
<td>
D4.2 Gap Analysis
</td>
<td>
Public
</td>
<td>
pdf
</td>
<td>
Shared on INTEND website, TRIMIS and Zenodo
</td> </tr>
<tr>
<td>
D4.3 Transport research agenda: blueprint on transport research
needs, priorities and opportunities
</td>
<td>
Public
</td>
<td>
pdf
</td>
<td>
Shared on INTEND website, TRIMIS and Zenodo
</td> </tr>
<tr>
<td>
Raw data from interviews and surveys
</td>
<td>
Public
</td>
<td>
To be decided
</td>
<td>
Internal repository and Zenodo
</td> </tr>
<tr>
<td>
D5.1 Dissemination and exploitation strategy
plan
</td>
<td>
Public
</td>
<td>
pdf
</td>
<td>
Shared on INTEND website, TRIMIS and Zenodo
</td> </tr>
<tr>
<td>
INTEND flyer
</td>
<td>
Public
</td>
<td>
pdf
</td>
<td>
Shared on INTEND website
</td> </tr>
<tr>
<td>
INTEND brochure
</td>
<td>
Public
</td>
<td>
pdf
</td>
<td>
Shared on INTEND website
</td> </tr>
<tr>
<td>
INTEND newsletters
</td>
<td>
Public
</td>
<td>
pdf
</td>
<td>
Shared on INTEND website
</td> </tr>
<tr>
<td>
Papers
</td>
<td>
Public
</td>
<td>
pdf
</td>
<td>
Shared on INTEND website and
Zenodo
</td> </tr>
<tr>
<td>
Dissemination event presentations
</td>
<td>
Public
</td>
<td>
pdf
</td>
<td>
Shared on INTEND website
</td> </tr>
<tr>
<td>
D5.2 Data Management Plan
</td>
<td>
Public
</td>
<td>
pdf
</td>
<td>
Shared on INTEND website, TRIMIS and Zenodo
</td> </tr>
<tr>
<td>
D5.3 Web tools
</td>
<td>
Public
</td>
<td>
pdf
</td>
<td>
Shared on INTEND website, TRIMIS and Zenodo
</td> </tr>
<tr>
<td>
Data included in the Transport Synopsis Tool
</td>
<td>
Public
</td>
<td>
To be discussed
</td>
<td>
Shared on the H2020 project
Transport Synopsis website
(accessible via the INTEND website)
</td> </tr>
<tr>
<td>
D6.1 H - Requirement No. 1
</td>
<td>
Confidential
</td>
<td>
pdf
</td>
<td>
Internal repository
</td> </tr>
<tr>
<td>
D6.2 H - Requirement No. 2
</td>
<td>
Confidential
</td>
<td>
pdf
</td>
<td>
Internal repository
</td> </tr>
<tr>
<td>
Deliverables drafts and working documents
</td>
<td>
Confidential
</td>
<td>
MS Word
</td>
<td>
Internal repository
</td> </tr>
<tr>
<td>
Tasks guidelines
</td>
<td>
Confidential
</td>
<td>
MS Word
</td>
<td>
Internal repository
</td> </tr> </table>
As indicated in the table, no specific methods or software are needed to
access the data, as the majority will be in formats accessible with commonly
used software.
The data will not contain private information about transport sector
stakeholders or participants to the project activities. This refers in
particular to:
* individual results of the online survey of WP3
* the contents shared during the webinar in WP3
* the reporting of the interviews to the experts in WP4
* individual results of the survey in WP4
In general, data sharing will comply with privacy and ethics guidelines of the
project.
Moreover, there is no way of identifying a person accessing the data on the
project website, TRIMIS or the Zenodo repository.
## Making data interoperable
The INTEND data will be **interoperable** . It will adhere to standards for
formats compliant with available software applications and will be accessible
through the project website, Zenodo and TRIMIS. This will allow data exchange
and re-use between researchers, institutions, organisations and countries.
All data will use standard vocabularies; if specific or new words generated by
the project are used, definitions will be provided. In addition to this, all
deliverables include at the beginning of the document a list of abbreviations.
## Increase data re-use (through clarifying licenses)
The public data will be **re-usable** by third parties, also after the end of
the project, with appropriate reference to the INTEND project. The public
deliverables will be published on the project website, on TRIMIS and on
Zenodo once they are approved by the European Commission.
The data will remain re-usable: the data saved on the website will be
available for 5 years, while the data on TRIMIS and Zenodo will be available
indefinitely.
The INTEND project does not define separate data quality assurance processes;
data quality is ensured during the implementation of each task by the
respective project partner.
# Procedures for data collection, storage, protection, retention and
destruction
All the data sets will be managed in line with the Guidelines on Open Access
to Scientific Publications and Research Data in Horizon 2020.
A central database will be developed on the project’s website. All the
aforementioned literature for D2.1 report will be stored centrally in order to
maximise standardization of the collected files which will have to be titled
and categorised according to mode or theme. This database will help project
partners to share and easily access literature, but will also allow external
persons to gather data about the several transport projects. It will be
accessible via the project home page (www.intend-project.eu) and function like
a Wiki, allowing its users to browse through the different thematic fields of
D2.1 report in order to access the project summaries.
mode will be available where the user can click on each mode and get the full
list of projects that are relevant. The projects will be classified by
thematic areas (competitiveness, environment, energy, infrastructure, systems
based on D2.1 report) and technology cluster, while the user will be able to
see the technologies that each project has researched. In addition, the user
will be able to filter the results by thematic areas and sector
(passenger/freight). This will offer a brief thesaurus of projects and
technologies that have been covered in D2.1 report. The Project Coordinator
will be responsible for the management of this “green” database / platform.
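A minimal sketch of the filtering behaviour described above is given below; the field names and example records are illustrative assumptions rather than the actual database schema:

```python
# Minimal sketch of the read-only filtering described above; the field
# names and example records are illustrative assumptions, not the
# actual database schema.
projects = [
    {"acronym": "PROJ-A", "mode": "road", "thematic_area": "Environment",
     "sector": "passenger", "technologies": ["electric drivetrains"]},
    {"acronym": "PROJ-B", "mode": "rail", "thematic_area": "Energy",
     "sector": "freight", "technologies": ["regenerative braking"]},
]

def filter_projects(mode=None, thematic_area=None, sector=None):
    """Return the projects matching every filter that is set."""
    return [
        p for p in projects
        if (mode is None or p["mode"] == mode)
        and (thematic_area is None or p["thematic_area"] == thematic_area)
        and (sector is None or p["sector"] == sector)
    ]

for project in filter_projects(mode="road", sector="passenger"):
    print(project["acronym"], project["technologies"])
```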
Together with the _transport projects database_ accessible by everybody, there
will be a **project documents repository** that will contain all project-
related data like confidential and public deliverables, other confidential
documents, working documents related to project tasks, project meetings
presentations. It will be accessible via the project homepage just by the
INTEND partners; they will have a personal password-protected account to
access the repository. The main purpose of this repository is to have an
actual database that will help optimizing data exchange and collaborative work
processes in the project, as well as store confidential documents. As already
mentioned above, data determined to be accessible for an external audience
(like public deliverables, electronic leaflets, transport projects, future
technologies synopses handbook) will be made freely accessible to the public
or interested parties separately via the homepage of the project website.
A **database of people** will also be developed within WP5. This will contain
a list of people registered on the website to receive the INTEND newsletters.
Subscribers need only enter their first and last names and their email
address. This information will be stored in the form of a list on the INTEND
webpage server in the documents repository. As it contains some sensitive
information, it will be managed and treated in full compliance with national
and EU data protection legislation. Specifically, subscribers have to confirm
the subscription via an automated mail, and they also receive a mail
confirmation with information on how to cancel the subscription. They can
easily unsubscribe via the link presented in the newsletter or by sending an
email to the contact person(s) so that they can be manually removed from the
list of subscribers. Only CUE, TUB, CERTH and the external subcontractor
will have access to this list.
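A minimal sketch of this double opt-in and unsubscribe flow follows, with storage and mail sending stubbed out; all names are illustrative assumptions:

```python
import secrets

# Minimal sketch, with storage and mail sending stubbed out; all
# variable and function names here are illustrative assumptions.
pending = {}      # confirmation token -> subscriber details
subscribers = {}  # email address -> subscriber details

def request_subscription(first_name, last_name, email):
    """Record a pending subscription and return the confirmation token
    that would be sent to the subscriber in the automated mail."""
    token = secrets.token_urlsafe(16)
    pending[token] = {"first": first_name, "last": last_name, "email": email}
    return token

def confirm_subscription(token):
    """Activate a subscription once the subscriber confirms by mail."""
    details = pending.pop(token)
    subscribers[details["email"]] = details

def unsubscribe(email):
    """Remove a subscriber, as triggered by the newsletter link or a
    manual request to the contact person(s)."""
    subscribers.pop(email, None)

token = request_subscription("Ada", "Example", "ada@example.org")
confirm_subscription(token)
unsubscribe("ada@example.org")
```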
As regards the **online questionnaire** expected to take place within WP3, all
the input data will be obtained using the Survey service (LimeSurvey
software). The Survey service will be hosted on a physically secured server
operating in Secure Socket Layer (SSL) mode, and all data will be stored in
the same server storage pools, protected with an Access Control List (ACL).
None of the questions related to participants’ personal data (except Country
of Business) will be offered in the Survey. The Main Survey module contains
questions from the Main Survey and the Additional Survey. Survey raw data
will contain the IP address of the participant only during a single Survey
session. If a specific question (belonging to the Main Survey) does not
fulfil the required answer scheme, an additional question belonging to the
Additional Survey will be activated (by a predefined logical schema stored in
the Main Survey module). In this way, both surveys can be completed within
the same Survey session. Upon the end of the Survey session, the
participant’s IP address and other session data will be erased from the
participant record, in order to avoid any personal metadata collection. The
IP address will be erased automatically from the raw record by a customized
script.
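A minimal sketch of what such an erasure step might look like is shown below; the record layout is an illustrative assumption, and the real script operates on the server-side LimeSurvey records:

```python
# Minimal sketch; the record layout is an illustrative assumption and
# the real script operates on LimeSurvey's server-side raw records.
def erase_session_data(record):
    """Remove the participant IP address and other session metadata
    from a raw survey record at the end of the survey session."""
    cleaned = dict(record)
    for field in ("ip_address", "session_id", "user_agent"):
        cleaned.pop(field, None)
    return cleaned

raw_record = {
    "answers": {"Q1": 4, "Q2": "rail"},
    "country_of_business": "DE",
    "ip_address": "192.0.2.10",   # erased upon session end
    "session_id": "a1b2c3",       # erased upon session end
}
print(erase_session_data(raw_record))
```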
The **webinar** service in WP3 will also be hosted on the physically secured
server, operating in Secure Socket Layer (SSL) mode. All data will likewise
be stored in the same server storage pools, protected with an Access Control
List (ACL). During webinar realization, the so-called “recording mode” will
be disabled on both the server and the client side. Also, none of the webinar
session data will be permanently stored on the hosting server. All personal
documents, personal session data and the server log data will be erased
immediately after the end of the webinar by activating the log data erase
script: this searches for any log and session data acquired during the
webinar realization and erases it in all related log files.
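A minimal sketch of such a log erase step is shown below; the log directory layout and the session marker are illustrative assumptions about how webinar entries would be identified:

```python
import pathlib

# Minimal sketch; the log directory layout and the session marker are
# illustrative assumptions about how webinar entries are identified.
def erase_webinar_logs(log_dir, marker="webinar-session"):
    """Rewrite every log file under log_dir with the webinar session
    lines removed, so no session data persists on the hosting server."""
    for log_file in pathlib.Path(log_dir).glob("*.log"):
        kept = [line for line in log_file.read_text().splitlines()
                if marker not in line]
        log_file.write_text("\n".join(kept) + ("\n" if kept else ""))
```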
Both webinar and survey processes will be conducted without the use of
third-party software and services: all data traffic will be based on a direct
client-server connection, without intermediate rerouting to third-party
servers (e.g. external cloud services).
As regards the User Access Control Hierarchy related to the specific user
research and operational roles, the access control scheme includes several
operational levels (a simplified sketch follows the list below):
* Read-only visual access to research data on the server, using the provided secured computer with authentication, authorization and auditing (AAA) capabilities. The use of recording devices (e.g. cameras) and physical media (USB flash drives, memory cards, optical media, hard disks, etc.), as well as the sending of data over intranet or external (internet) connections, is strictly prohibited by technical and organizational means;
* Read-write visual access to research data on the server, under the same secured conditions and with the same prohibitions as the previous level;
* Read-only visual access to research data on the server, using the provided secured computer with AAA capabilities. The use of recording devices and physical media is allowed, but sending data over intranet or external (internet) connections is forbidden. The process is monitored by the Project data security officer, whose task is to register all devices and storage media used for data copies in order to apply erase/destroy-data procedures after data manipulation;
* Read-write visual access to research data on the server, under the same conditions and monitoring as the previous level;
* Data manipulation on computers and other devices used as the destination in the copy procedures mentioned above. Data manipulation is allowed and monitored by the Project data security officer, whose task is to apply erase/destroy-data procedures on the destination device after data manipulation;
* Data transmission between research/operational personnel using secured/encrypted intranet and internet connections hosted by devices/servers with AAA capabilities.
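A simplified sketch of these operational levels as permission flags is given below; the level numbering and the mapping of team members to levels (a task of each Project data security officer) are illustrative assumptions:

```python
from enum import Flag, auto

# A simplified sketch of the operational levels above as permission
# flags; the level numbering and the mapping of team members to levels
# (a task of each Project data security officer) are illustrative.
class Access(Flag):
    READ = auto()
    WRITE = auto()
    COPY_TO_MEDIA = auto()       # registered, monitored copies only
    NETWORK_TRANSMIT = auto()    # secured/encrypted AAA connections only

OPERATIONAL_LEVELS = {
    1: Access.READ,
    2: Access.READ | Access.WRITE,
    3: Access.READ | Access.COPY_TO_MEDIA,
    4: Access.READ | Access.WRITE | Access.COPY_TO_MEDIA,
}

def is_allowed(level, action):
    """Check whether a team member's operational level permits an action."""
    return action in OPERATIONAL_LEVELS.get(level, Access(0))

print(is_allowed(2, Access.WRITE))          # True
print(is_allowed(1, Access.COPY_TO_MEDIA))  # False
```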
Respecting the duty to secure data confidentiality according to the INTEND
Data Management Plan, each INTEND partner will delegate the Project data
security officer role to one team member (by the end of M4, January 2018).
Considering the User Access Control Hierarchy, the responsibility of each
Project data security officer is to associate one of the abovementioned
operational levels with each team member, according to that member’s specific
research role within the partner’s team.
Survey and webinar servers are physically secured in rooms where access is
allowed only to authorized personnel. All processes related to the survey and
webinar are exclusively hosted by these servers, strictly excluding the use
of any external (third-party) services and their servers. All communication
is SSL encrypted, using digital certificates issued by relevant CA
(Certificate Authority) institutions.
As regards the WP4 **qualitative expert interviews**, all the input data will
be generated either digitally (audio files) or in writing as flowing texts.
Using the license-based qualitative evaluation software MAXQDA, raw data will
be transcribed and afterwards stored on an internal server. To guarantee the
protection of personal and sensitive data throughout the whole evaluation
process, the software will be installed on a limited number of computers
only, which are personalized through password protection. For further
scientific evaluations, and with regard to the dissemination material, all
information that could possibly allow inferences about the interviewee will
be disguised or entirely removed from the transcripts.
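A minimal sketch of this kind of pseudonymisation pass is shown below; the name list and placeholder tokens are illustrative assumptions, and a real pass would also cover employers, places and other identifying details:

```python
import re

# Minimal sketch; the name list and placeholder tokens are illustrative
# assumptions, and a real pass would also cover employers, places and
# other potentially identifying details.
def pseudonymise(transcript, replacements):
    """Replace each identifying term with its agreed placeholder."""
    for real, alias in replacements.items():
        transcript = re.sub(rf"\b{re.escape(real)}\b", alias, transcript)
    return transcript

text = "Maria Muster said the depot in Ulm will close next year."
print(pseudonymise(text, {"Maria Muster": "[Interviewee 1]",
                          "Ulm": "[City A]"}))
```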
All the input data produced within the WP4 **online survey** will be
generated using the online survey software Unipark. With this software tool,
the customer is solely responsible for deciding who will participate in the
survey, how it is made available to participants and what data will finally
be collected. Any information stored on the provider’s server is treated
confidentially according to the GDPR (General Data Protection Regulation of
the EU), and access to the data is only possible for authorized personnel.
For customers located in the European Economic Area (EEA), all processing of
Personal Data is – according to the provider – performed in accordance with
privacy rights and regulations following EU Directive 95/46/EC of the
European Parliament and of the Council of 24th October 1995 (the Directive),
and the implementations of the Directive in local legislation.
All data generated in WP4 will be held securely on password-protected
computers and server systems, which are periodically maintained by the IT
division of ZHAW. The computers as well as the server systems are kept in
office rooms where access is allowed to authorized personnel only. Whenever
the computers are moved to another location or used externally, the owner of
the computer has to follow the strict guidelines and safety regulations of
ZHAW concerning the handling of internal devices with sensitive data abroad.
# Open access to scientific publications
The Project Partners who want to publish peer-reviewed scientific papers that
contain data and results from the project, or that promote the project, will
have to ensure that every effort is made for the papers to be made available
as “open access” where possible.
Most of the papers will be published at conferences, which is a quick process
with great outreach, especially considering the short duration of the
project. All articles will have to contain the project’s acronym and
reference to the words EU and Horizon 2020, ensuring the promotion of the
funding scheme and the identification and accessibility of the work in the
future.
The proceedings of the conferences will be freely available. Moreover, the
papers will be published on the project website and on Zenodo, as well as the
public deliverables, and will be made accessible to all to ensure that the
research community has long-term access to the INTEND data and results.
Zenodo enables researchers to share and preserve research outputs of any size
and format, from any science, and therefore serves the INTEND purposes well.
The main reasons for selecting it are:
* Datasets and documents will be retained for the lifetime of the repository
* The submitted datasets have a persistent and unique identifier, essential for sustainable citations
* It helps track how the data has been used by providing access and download statistics
* It offers clear terms and conditions that meets legal requirements (e.g. for data protection) and allow reuse without licensing conditions
* It is provided free of charge for educational and informational use.
CUE will be responsible for uploading the relevant documents to Zenodo and
also to the project’s website. All partners will be responsible for
disseminating the open datasets using social networks and partners’ media
channels.
# Intellectual Property Rights
As the INTEND project is a Coordination and Support Action, the new knowledge
created will form a common ground on which to accumulate further knowledge.
The management of intellectual property will be the focus of the
dissemination manager and the coordinator.
The dissemination manager, CERTH, possesses significant experience in IPR and
will make sure that the following rules will be applied:
* Pre-existing know-how will remain the property of the partner having brought it into the project.
* Pre-existing know how will be made available, by their owners, to the project participants on the need to know basis. Usage outside the project will be decided among the owners and the potential users on a case-by-case basis.
* Knowledge will remain the property of the partner involved in its generation / production.
* Knowledge jointly generated (without possibility to identify the individual share of work) shall be the joint property of the partners concerned.
# Allocation of resources, data security and ethical aspects
The INTEND data will be stored in the documents repository, in the transport
projects database, on the project website and in the Transport Synopsis Tool
database. The costs will be covered by the budget. As regards the storage on
Zenodo and TRIMIS, this was not budgeted; however, there are no costs
associated with these repositories as they are free of charge. CUE will be
responsible for uploading the relevant documents to Zenodo, while CERTH is
responsible for uploading the relevant documentation to TRIMIS.
Each project partner is responsible for a reliable data management related to
their work within the project. However, the Project Coordinator is responsible
for the overall data management of the project.
Each project partner is also responsible for the security and preservation of
their data. The project partners’ servers are regularly and continuously
backed-up, as well as the project website. Moreover, access to the documents
repository and the Synopsis Tool (both through the project website) will be
controlled by use of user name and password. In the event of an incident, the
data will be recovered according to the necessary procedures of the data
owner.
As regards ethical aspects, Work Package 6 is entirely dedicated to specify
all relevant ethical aspects related to the involvement of people to the
project activities. Two deliverables will be submitted: D6.1 “H – Requirement
No. 1” and D6.2 “H – Requirement No. 2”. Specifically, D6.1 includes
information on the procedures used to identify and recruit research
participants to be involved in the diverse project activities (e.g. webinar,
surveys and interviews) and on the consent procedures for the participation of
humans, comprising a template of the informed consent form to be filled in by
the participants before taking part in any project activity. D6.2 provides
details about the procedures that will be implemented for data collection,
storage, protection, retention, destruction and confirmation in order to
protect individuals’ privacy.
## **1\. EXECUTIVE SUMMARY**
The REACH project places a high level of importance on the management of data
and for this reason has dedicated an entire work package to data collection
and measurements (WP6). The consortium strongly believes in the concepts of
open science and in the benefits that the European innovation ecosystem and
economy can draw from allowing re-using data at a larger scale. Despite this,
while carrying out pilot activities, individual partners will manage personal
information and will work with local groups, potentially collecting sensitive
data, and will need to consider privacy and ethical issues. A balance between
these two principles will guide REACH project’s data management policy.
This document is the first deliverable of WP6 - _Data Collection and
Measurement_ \- and represents the first version of REACH Data Management Plan
(DMP). It describes what data the project will generate, how they will be
produced, collected and used in all of its relevant subsections/categories. It
also aims to detail how the data related to the REACH project will be
disseminated, shared and preserved. Together with other documents, this
deliverable represents the REACH consortium’s work to create user-friendly
instruments that allow project partners to deal more efficiently with data and
ethics.
Information provided during pilots, local encounters or collected during
surveys involves personal data protection and management. The recruitment
process to be followed by the consortium for the engagement of stakeholders,
to provide information and/or comment on aspects of Cultural Heritage (CH),
digital technologies and participatory approaches, will be transparent, and
these criteria are included and explained in the Participant Information
Sheet and Informed Consent Form (Appendix 2).
encounters will discuss personal data related issues with attendees, to agree
the required level of protection. This might, for example, include agreement
on anonymity, pseudonymisation, restrictions in the dissemination of data and
appropriate editorial decision-making processes. Stakeholders will be informed
of the purpose of the interview and that their decision to participate is made
freely and with the right to withdraw.
For some of the activities to be carried out by the project, it will be
necessary to collect basic personal data (e.g. full name, contact details,
background, opinions). Such data will be protected in compliance with the
General Data Protection Regulation (GDPR) (briefly summarised in Appendix 3).
The DMP currently lists 16 datasets, mainly relating to the four pilots
carried out by ELTE, SPK, UGR and CUNI. These datasets can be divided into
three main groups: 4 outcomes from local encounters, 4 recordings of
one-to-one and focus group conversations, and 4 contact lists of people
involved in these activities.
participatory activities, the database of good practices in CH, informed
consent forms and REACH’s newsletter subscribers.
Due to the nature of the REACH project, as a Coordination and Support Action
that will bring together and collect personal and sensitive data from a wide-
ranging network of subjects involved in cultural heritage – local
administrations, cultural heritage professionals, academic experts, arts
practitioners, professionals in archives and galleries, associations, interest
groups and policy-makers - most datasets will not be published online or by
any other means.
## **2\. INTRODUCTION**
This document provides an overview of the considerations that partners should
make in the collection, use and storage of data, as well as outlines
information and characteristics of the datasets that are likely to be produced
during the three years of implementation of the REACH project.
As the first version of the project Data Management Plan (DMP), D6.1
represents an initial appraisal on the subject and all the information
provided in this document will be revised in the next iteration. It is
considered to be a living document that will evolve as the REACH project
progresses. The DMP will be reissued annually (in Month 18 and 30) and
submitted to the Commission as part of the Periodic Report at the halfway
stage and end of the REACH project.
Considering that REACH is a Coordination and Support action and not a research
project, the number of original datasets that will be produced is limited and
concentrated on the 4 pilots on Institutional, Minority, Rural and Small Towns
Heritage, as well as at project events.
### 2.1 BACKGROUND
DMP is a mandatory document that is part of the Open Research Data Pilot (ORD)
1 that REACH complies with. The ORD pilot aims to improve and maximise access
to and re-use of research data generated by Horizon 2020 projects and takes
into account the need to balance openness and protection of scientific
information, commercialisation and Intellectual Property Rights (IPR), privacy
concerns, security as well as data management and preservation issues.
The Grant Agreement governs project activity (particularly relevant to D6.1
are Articles 18, 23a, 24, 25, 26, 27, 30, 31, 36, 39 and 52 and Annex I –
Description of Action) that relate to:
* type of data;
* storage;
* recruitment process;
* confidentiality;
* ownership;
* management of intellectual property; access.
The procedures that will be implemented for data collection, storage, access,
sharing policies, protection, retention and destruction will be according to
the requirements of the national legislation of each partner and in line with
the EU standards.
The REACH project consortium will comply with the European Charter for
Researchers and the Code of Conduct for the Recruitment of Researchers
described in Article 32, as well as the European Code of Conduct for Research
Integrity that is detailed in Article 34 of the Grant Agreement.
In accordance with Grant Agreement Article 17, data must be made available
upon request, or in the context of checks, reviews, audits or investigations.
All REACH partners must keep any data, documents or other material
confidential during the implementation for the project and for four years
after the end of the project, as per Article 36 of the Grant Agreement.
### 2.2 ROLE OF THIS DELIVERABLE IN THE PROJECT
This DMP describes the data management life cycle for the data to be
collected, processed and/or generated during the REACH project. The delivery
of this DMP coincides with the achievement of REACH’s milestone MS13.
It includes information on the handling of research data during and after the
end of the project, what data will be collected and/or generated, which
methodology, formats and standards will be applied, whether data will be
shared and how data will be curated and preserved, during and after the end of
the project. It reflects the ways that individual partners will handle data,
and will be updated accordingly to how their practices in this regard may
evolve during the project. Together with other documents, this deliverable
represents the REACH Consortium’s work to create user-friendly instruments
that allow project partners to deal more efficiently with data and ethics.
### 2.3 APPROACH
The University of Granada (UGR) is the partner responsible for this
deliverable, but involved all project partners in identifying potential
datasets to be included and described here. The initial approach involved
preparing a table of 2 columns and 21 rows, organised in 5 sections, to
facilitate the identification and description of those datasets; this table
was distributed to the partners in month 3.
To define the general structure of the deliverable, the Digital Curation
Centre DMP online tool 2 has been used, along with the Guidelines on FAIR
Data Management in Horizon 2020 (Version 3.0) 3 . The FAIR data principles
require that data should be Findable, Accessible, Interoperable and Reusable
(FAIR). They were generated to improve the practices for data management and
data-curation and can be applied to a wide range of data management
purposes/projects and will be considered in this and subsequent versions of
the REACH DMP. Examples of several DMPs from others H2020 projects have also
been consulted, as have COVUNI data guide documents relating to the new GDPR
and to ethical considerations.
### 2.4 STRUCTURE OF THE DOCUMENT
Following an executive summary and general introduction covering transversal
issues, this document has two main sections. The first considers processes for
data collection, protection, sharing, preservation and re-use, with the second
outlining several datasets that will be produced during the project, with
individual tables providing further detail. The content of the document will
be summarised in the concluding section.
## **3\. DATA MANAGEMENT PLAN**
### 3.1 DATA MANAGEMENT POLICY
The REACH project places a high level of importance on the management of data
and for this reason has dedicated an entire work package to data collection
and measurements (WP6). The consortium strongly believes in the concepts of
open science and in the benefits that the European innovation ecosystem and
economy can draw from allowing data to be reused at a larger scale. At the
same time, while carrying out pilot activities, individual partners will
manage personal information and will work with local groups, potentially
collecting sensitive data, and will need to consider privacy and ethical
issues. A balance between these two principles will guide the REACH project’s
data management policy.
### 3.2 DATA COLLECTION PROCESS
It is recognised that each partner will need to follow institutional and
national guidelines; however, the Coordinating partner offers guidance that
might be helpful for partners carrying out activities that involve data
collection in the framework of the four pilots.
It is good practice to use a dedicated device to record interviews. Following
completion of the interview, data should be copied from the device onto a
secure server at the first opportunity and then deleted from the device; it
should not be stored on an unencrypted laptop. Data should never be
transmitted using public or unsecured Wi-Fi connections.
When an online survey is needed, a careful selection of platform should be
made especially where data could be saved in a Cloud. Ideally, the platform
chosen would hold data within Europe rather than the United States (and
preferably the European Union). Similarly, some institutions do not allow data
storage/file sharing outside of the EU with tools such as Dropbox. REACH
project partners are able to use the COVUNI SharePoint site for this purpose.
In the case of paper-based surveys, all paperwork containing participant data
must be locked in a secure cabinet until it is processed. Once processing is
completed and evidence is no longer required, all documents should be
shredded. This should only be done where it fits with the guidance of the
Grant Agreement for the retention of project data.
Three types of data may be collected, each of which has associated
requirements:
* for primary data created e.g. from interviews and focus groups, it is necessary to describe the data collected, including details of any audio and/or visual recording or transcripts created;
* in the case of secondary data, a description of file types, archives and datasets researched should be compiled together with numerical and descriptive results;
* where quantitative data includes the use of spreadsheets to measure survey results, metadata should be provided describing the participants involved, the survey design methodology and the sampling criteria.
### 3.3 PERSONAL DATA PROTECTION
Information provided during pilots, local encounters or collected during
surveys involves personal data protection and management and, as stated in the
Grant Agreement (Annex 1, Part B, 1.3.4), “ _protection of the privacy of
network and pilot participants will be taken into account while planning
participatory activities_ ”.
The recruitment process to be followed by the consortium for the engagement of
stakeholders, to provide information and/or comment on aspects of Cultural
Heritage (CH), digital technologies and participatory approaches, will be
transparent, and these criteria are included and explained in the Participant
Information Sheet and Informed Consent Form (Appendix 2). Stakeholders will be
informed of the purpose of the interview and that their decision to
participate is made freely and with the right to withdraw.
These documents have been approved by the Coventry University Research Ethics
Committee and REACH project partners have translated them into the native
languages of participants. It has been agreed that given some of the
vulnerable groups that the REACH project will work with, approaches will be
made through NGOs, as appropriate. If, for any reason, the participant cannot
sign the Consent Form, an audio recording of consent or a signature of a
third-party witness would be considered acceptable. Further consent would be
needed if any part of an interview is to be used externally in either audio or
video format or if photographs of the subject are to be used.
For some of the activities to be carried out by the project, it will be
necessary to collect basic personal data (e.g. full name, contact details,
background, opinions). Such data will be protected in compliance with the
General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679), which
will come into effect in May 2018, replacing the EU's Data Protection
Directive 95/46/EC. In particular, the GDPR conditions set out for
collecting, using and further processing personal data for research purposes
will be taken into account (Appendix 3).
It is also possible that in the course of carrying out activities, project
partners will come into possession of personal sensitive data (or special
categories of personal data under the GDPR). Special categories of personal
data are defined as “ _personal data revealing racial or ethnic origin,
political opinions, religious or philosophical beliefs, or trade union
membership, and the processing of genetic data, biometric data for the purpose
of uniquely identifying a natural person, data concerning health or data
concerning a natural person’s sex life or sexual orientation”._ These data are
subject to much stricter requirements and, although GDPR allows personal data
used for research to be stored for longer periods of time, REACH partners are
required not to keep personal sensitive data for longer than is necessary.
The researcher managing the local encounters will discuss personal data
related issues with attendees, to agree the required level of protection. This
might for example include agreement on anonymity, pseudonymisation,
restrictions in the dissemination of data and appropriate editorial decision-
making processes. All personal data collection will be carried out after
giving data subjects full details on how the information provided will be
used, and after obtaining signed informed consent forms.
### 3.4 IPR MANAGEMENT
Ordinarily, partners will retain copyright and intellectual property rights
for data collected and other associated outputs. Should third parties take
photographs of project events, copyright is held by them and permission is
needed from them for the images to be included within REACH project reports
and media. National legislation, such as Freedom of Panorama, may also affect
the holder of rights.
During the project, partners are likely to share (encrypted or password
protected) data (possibly via SharePoint), although they would retain its
ownership. All parties in receipt are responsible for the appropriate control
of data.
Partners may also produce shared outputs. In cases where significant
intellectual or practical contributions are made, attribution of authorship is
awarded accordingly; for smaller contributions, acknowledgement should be
given. As stated by the Grant Agreement (Annex 1, Part B, 2.2.4) “ _Ownership
of the written texts of all REACH publications (e.g. reports, promotional
texts) lies with the authors or the group of authors, who will be mentioned in
the documents. REACH will have the right to publish and disseminate finalised
documents in all media deemed appropriate to secure the highest possible
impact. […] Speakers at events will be asked to approve the publication of
their slides on the project website and/or video publication/webstreaming of
their talk_ .”
### 3.5 DATA SHARING
UGR, the partner responsible for drafting the DMP along with all work package
leaders, will define how data will be shared, the access procedures, the
embargo periods, the necessary software and other tools for preservation and
to enable re-use. In case of datasets that cannot be shared, the reasons are
mentioned in the corresponding tables in section 4.3 (e.g. ethical, rules of
personal data, intellectual property, commercial, privacy-related and
security-related).
As part of the project’s internal reporting process, partners are able to
share photographs taken and quotations given by participants with each other
that relate to project activity. At the time of uploading them to the secure
COVUNI SharePoint site, details of informed consent are shared, attribution
provided and details of any availability or restriction to future re-use are
provided. Where appropriate, the photographs and/or quotations will be used in
project reports and, if permissible, possibly on the website and in project
materials.
### 3.6 ARCHIVING AND PRESERVATION
Each dataset will usually be archived and preserved by the secure IT
infrastructure of the partner that is responsible for its collection and
processing. Exceptions to this general rule are dataset #14 - _Database of
participatory activities good practices in CH_ \- which will be hosted at
_https://open-heritage.eu/_ and served over an encrypted connection, dataset
#15 - _Data from participation in workshops and pilots_ \- which will be
centrally stored by UGR, and dataset #16 - _Informed consent forms_ \- which
will be kept in paper format.
Participants will have 28 days to withdraw from the study and request that
their information is deleted. After that, data will be formally incorporated
in the study archives. All data repositories used by the project will include
a secure protection of personal sensitive data. Partners will be asked, during
and at the end of the project, about the status of recorded data and reminded
of the need to delete or destroy data that is no longer required.
### 4\. RESULTS
#### 4.1 DESCRIPTION
The list of identified datasets that will be collected, processed and/or
generated by the REACH project is provided below, while the type and details
for each dataset are given in the subsequent section #4.3.
This list is indicative and represents a first appraisal of the data that the
REACH project will produce – it may be adapted (addition/removal of datasets)
in the next versions of the DMP, to take into consideration the project
developments.
<table>
<tr>
<th>
**#**
</th>
<th>
**Dataset Name**
</th>
<th>
**Reference partner**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Outcomes from local encounters on minority heritage
</td>
<td>
ELTE
</td> </tr>
<tr>
<td>
2
</td>
<td>
Recordings of local encounters on minority heritage
</td>
<td>
ELTE
</td> </tr>
<tr>
<td>
3
</td>
<td>
Outcomes from local encounters on institutional heritage
</td>
<td>
SPK
</td> </tr>
<tr>
<td>
4
</td>
<td>
Recordings of local encounters on institutional heritage
</td>
<td>
SPK
</td> </tr>
<tr>
<td>
5
</td>
<td>
Outcomes from local encounters on rural heritage
</td>
<td>
UGR
</td> </tr>
<tr>
<td>
6
</td>
<td>
Recordings of local encounters on rural heritage
</td>
<td>
UGR
</td> </tr>
<tr>
<td>
7
</td>
<td>
Outcomes from local encounters on small towns’ heritage
</td>
<td>
CUNI
</td> </tr>
<tr>
<td>
8
</td>
<td>
Recordings of local encounters on small towns’ heritage
</td>
<td>
CUNI
</td> </tr>
<tr>
<td>
9
</td>
<td>
Contact list on minority heritage
</td>
<td>
ELTE
</td> </tr>
<tr>
<td>
10
</td>
<td>
Contact list on institutional heritage
</td>
<td>
SPK
</td> </tr>
<tr>
<td>
11
</td>
<td>
Contact list on rural heritage
</td>
<td>
UGR
</td> </tr>
<tr>
<td>
12
</td>
<td>
Contact list on small towns’ heritage
</td>
<td>
CUNI
</td> </tr>
<tr>
<td>
13
</td>
<td>
REACH Newsletter Subscribers
</td>
<td>
PROMOTER
</td> </tr>
<tr>
<td>
14
</td>
<td>
Database of participatory activities good practices in CH
</td>
<td>
UGR
</td> </tr>
<tr>
<td>
15
</td>
<td>
Data from participation in workshops and pilots
</td>
<td>
UGR
</td> </tr>
<tr>
<td>
16
</td>
<td>
Informed consent forms
</td>
<td>
COVUNI
</td> </tr> </table>
#### 4.2 TEMPLATE STRUCTURE AND FIELDS DESCRIPTIONS
Each dataset that will be collected, processed or generated within the project
will have metadata describing its characteristics. Each table will provide
information structured in several sections, with corresponding fields,
regarding the following (an illustrative machine-readable sketch of the
template is given after the table):
* data identification;
* project implementation;
* technical details;
* data exploitation and sharing;
* archiving and preservation.
<table>
<tr>
<th>
**#:** _sequential number_
</th>
<th>
**Dataset name:**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data identification**
</td>
<td>
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
_Short description of the dataset_
</td> </tr>
<tr>
<td>
Source
</td>
<td>
_How the dataset has been generated_
</td> </tr>
<tr>
<td>
Creation framework
</td>
<td>
_REACH and/or previous projects_
</td> </tr>
<tr>
<td>
**Project implementation**
</td>
<td>
</td> </tr>
<tr>
<td>
Partners involved
</td>
<td>
_List of all partners involved_
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
_List of related tasks and WPs_
</td> </tr>
<tr>
<td>
Delivery date
</td>
<td>
_Date of dataset publishing/data collection ending_
</td> </tr>
<tr>
<td>
**Technical details**
</td>
<td>
</td> </tr>
<tr>
<td>
Standard and metadata used (if applicable)
</td>
<td>
_ex. ISO 19139, Dublin Core, RDF, ..._
</td> </tr>
<tr>
<td>
Dataset type
</td>
<td>
_Text, audio, video, database, ..._
</td> </tr>
<tr>
<td>
Native and interoperable format(s)
</td>
<td>
_According to dataset type_
</td> </tr>
<tr>
<td>
Estimated number of records or data volume
</td>
<td>
_Approximate end volume in records, length or MB_
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td>
<td>
</td> </tr>
<tr>
<td>
Purpose and use of the dataset
</td>
<td>
_Within the REACH project_
</td> </tr>
<tr>
<td>
Linked outputs/products
</td>
<td>
_Academic publications, deliverables, …_
</td> </tr>
<tr>
<td>
Intellectual property rights
</td>
<td>
_Internal, confidential or public._
_CC License and Open Access_
</td> </tr>
<tr>
<td>
Distribution and sharing channels
</td>
<td>
_Publications, download, embed, ..._
</td> </tr>
<tr>
<td>
Sensitive information
</td>
<td>
_Personal data, informed consent, long term preservation_
</td> </tr>
<tr>
<td>
Language
</td>
<td>
_In which language(s) data will be recorded_
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
</td> </tr>
<tr>
<td>
Data storage
</td>
<td>
_Where? For how long?_
</td> </tr>
<tr>
<td>
Backup policy
</td>
<td>
_How many copies? How often?_
</td> </tr> </table>
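Where partners prefer to keep these descriptions in machine-readable form alongside the tables, the template can also be captured as a simple record. The sketch below is illustrative only: the field names mirror the sections of the table above, and the values are placeholders, not a real REACH dataset.

```python
# Illustrative machine-readable version of the dataset description template.
# Field names mirror the table above; all values are placeholders.
dataset_description = {
    "id": 0,                                   # sequential number
    "dataset_name": "",
    "data_identification": {
        "description": "",                     # short description of the dataset
        "source": "",                          # how the dataset has been generated
        "creation_framework": "",              # REACH and/or previous projects
    },
    "project_implementation": {
        "partners_involved": [],
        "related_wps_and_tasks": [],
        "delivery_date": None,
    },
    "technical_details": {
        "standards_and_metadata": [],          # e.g. ISO 19139, Dublin Core, RDF
        "dataset_type": "",                    # text, audio, video, database, ...
        "native_and_interoperable_formats": [],
        "estimated_volume": "",                # records, length or MB
    },
    "data_exploitation_and_sharing": {
        "purpose_and_use": "",
        "linked_outputs": [],
        "intellectual_property_rights": "",
        "distribution_and_sharing_channels": "",
        "sensitive_information": "",
        "languages": [],
    },
    "archiving_and_preservation": {
        "data_storage": "",                    # where? for how long?
        "backup_policy": "",                   # how many copies? how often?
    },
}
```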
#### 4.3 DATASETS
<table>
<tr>
<th>
**#:** 1
</th>
<th>
**Dataset name:** Outcomes from local encounters on minority heritage
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Opinions, comments and facts collected in a structured way
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Meetings and workshops with local stakeholders held in Hungary
</td> </tr>
<tr>
<td>
Creation framework
</td>
<td>
REACH project - WP5 - Participatory Pilots
</td> </tr>
<tr>
<td>
**Project implementation**
</td> </tr>
<tr>
<td>
Partners involved
</td>
<td>
ELTE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5: T5.2
</td> </tr>
<tr>
<td>
Delivery date
</td>
<td>
January 2020
</td> </tr>
<tr>
<td>
**Technical details**
</td> </tr>
<tr>
<td>
Dataset type
</td>
<td>
Alphanumeric data
</td> </tr>
<tr>
<td>
Native (interoperable formats)
</td>
<td>
docx (doc, rtf, odt)
</td> </tr>
<tr>
<td>
Estimated data volume
</td>
<td>
6000 words
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Purpose and use of the dataset
</td>
<td>
Working papers and good practice guides, lessons learnt, participatory
approaches.
</td> </tr>
<tr>
<td>
Linked outputs/products
</td>
<td>
This dataset will be primarily used to produce the D5.2 deliverable and
possibly for academic publications.
</td> </tr>
<tr>
<td>
Intellectual property rights
</td>
<td>
Linked outputs will be covered by a Creative Commons Attribution-NonCommercial 4.0 License (http://creativecommons.org/licenses/by-nc/4.0).
</td> </tr>
<tr>
<td>
Distribution and sharing channels
</td>
<td>
This dataset is confidential, will only be used for research purposes and will
not be published online or by any other means.
</td> </tr>
<tr>
<td>
Sensitive information
</td>
<td>
Personal data, with consent gained from data subjects using individual
informed consent forms. In linked outputs, sensitive data will be anonymised
or pseudonymised.
</td> </tr>
<tr>
<td>
Language(s)
</td>
<td>
Hungarian, English
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
Data storage
</td>
<td>
The dataset will be preserved by ELTE IT infrastructure
</td> </tr>
<tr>
<td>
Backup policy
</td>
<td>
2 onsite and 1 offsite copies in sync
</td> </tr> </table>
<table>
<tr>
<th>
**#:** 2
</th>
<th>
**Dataset name:** Recordings of local encounters on minority heritage
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Recording of one-to-one and focus groups conversations
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Meetings and workshops with local stakeholders held in Hungary
</td> </tr>
<tr>
<td>
Creation framework
</td>
<td>
REACH project - WP5 – Participatory Pilots
</td> </tr>
<tr>
<td>
**Project implementation**
</td> </tr>
<tr>
<td>
Partners involved
</td>
<td>
ELTE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5: T5.2
</td> </tr>
<tr>
<td>
Delivery date (estimated)
</td>
<td>
January 2020
</td> </tr>
<tr>
<td>
**Technical details**
</td> </tr>
<tr>
<td>
Dataset type
</td>
<td>
Video/audio recordings
</td> </tr>
<tr>
<td>
Native (interoperable formats)
</td>
<td>
mp4 (ogg, mkv) - mp3 (ogg)
</td> </tr>
<tr>
<td>
Estimated data volume
</td>
<td>
30GB in mp4 – 2.8GB in mp3
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Purpose and use of the dataset
</td>
<td>
The full recordings of the local encounters are intended as a backup for the
outcomes recorded in textual format.
</td> </tr>
<tr>
<td>
Linked outputs/products
</td>
<td>
This dataset will be primarily used to produce the D5.2 deliverable and
possibly for academic publications.
</td> </tr>
<tr>
<td>
Intellectual property rights
</td>
<td>
Linked outputs will be covered by a Creative Commons Attribution-NonCommercial 4.0 License (http://creativecommons.org/licenses/by-nc/4.0).
</td> </tr>
<tr>
<td>
Distribution and sharing channels
</td>
<td>
This dataset is confidential, will only be used for research purposes and will
not be published online or by any other means.
</td> </tr>
<tr>
<td>
Sensitive information
</td>
<td>
Personal data, with consent gained from data subjects using individual
informed consent forms. In linked outputs, sensitive data will be anonymised
or pseudonymised.
</td> </tr>
<tr>
<td>
Language(s)
</td>
<td>
Hungarian, English
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
Data storage
</td>
<td>
The dataset will be preserved by ELTE IT infrastructure
</td> </tr>
<tr>
<td>
Backup policy
</td>
<td>
2 onsite and 1 offsite copies in sync
</td> </tr> </table>
<table>
<tr>
<th>
**#:** 3
</th>
<th>
**Dataset name:** Outcomes from local encounters on institutional heritage
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Opinions, comments and facts collected in a structured way
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Meetings and workshops with staff members of CH institutions
</td> </tr>
<tr>
<td>
Creation framework
</td>
<td>
REACH project - WP5 - Participatory Pilots
</td> </tr>
<tr>
<td>
**Project implementation**
</td> </tr>
<tr>
<td>
Partners involved
</td>
<td>
SPK
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5: T5.3
</td> </tr>
<tr>
<td>
Delivery date
</td>
<td>
January 2020
</td> </tr>
<tr>
<td>
**Technical details**
</td> </tr>
<tr>
<td>
Dataset type
</td>
<td>
Alphanumeric data
</td> </tr>
<tr>
<td>
Native (interoperable formats)
</td>
<td>
docx (doc, rtf, odt)
</td> </tr>
<tr>
<td>
Estimated data volume
</td>
<td>
6000 words
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Purpose and use of the dataset
</td>
<td>
Analyse, on a comparative basis, potential and needs of different types of CH
institutions to widen their participatory approach. Collect good practice
examples.
</td> </tr>
<tr>
<td>
Linked outputs/products
</td>
<td>
This dataset will be primarily used to produce the D5.3 deliverable and
possibly for academic publications.
</td> </tr>
<tr>
<td>
Intellectual property rights
</td>
<td>
Linked outputs will be covered by a Creative Commons Attribution-NonCommercial 4.0 License (http://creativecommons.org/licenses/by-nc/4.0).
</td> </tr>
<tr>
<td>
Distribution and sharing channels
</td>
<td>
This dataset is confidential, will only be used for research purposes and will
not be published online or by any other means.
</td> </tr>
<tr>
<td>
Sensitive information
</td>
<td>
Personal data, with consent gained from data subjects using individual
informed consent forms. In linked outputs, sensitive data will be anonymised
or pseudonymised.
</td> </tr>
<tr>
<td>
Language(s)
</td>
<td>
German, English
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
Data storage
</td>
<td>
The dataset will be preserved by SPK IT infrastructure
</td> </tr>
<tr>
<td>
Backup policy
</td>
<td>
2 onsite and 1 offsite copies in sync
</td> </tr> </table>
<table>
<tr>
<th>
**#:** 4
</th>
<th>
**Dataset name:** Recordings of local encounters on institutional heritage
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Recording of one-to-one and focus groups conversations
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Meetings and workshops with staff members of CH institutions
</td> </tr>
<tr>
<td>
Creation framework
</td>
<td>
REACH project - WP5 - Participatory Pilots
</td> </tr>
<tr>
<td>
**Project implementation**
</td> </tr>
<tr>
<td>
Partners involved
</td>
<td>
SPK
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5: T5.3
</td> </tr>
<tr>
<td>
Delivery date (estimated)
</td>
<td>
January 2020
</td> </tr>
<tr>
<td>
**Technical details**
</td> </tr>
<tr>
<td>
Dataset type
</td>
<td>
Video/audio recordings
</td> </tr>
<tr>
<td>
Native (interoperable formats)
</td>
<td>
mp4 (ogg, mkv) - mp3 (ogg)
</td> </tr>
<tr>
<td>
Estimated data volume
</td>
<td>
30GB in mp4 – 2.8GB in mp3
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Purpose and use of the dataset
</td>
<td>
The full recordings of the encounters with CH institutions are intended as a
backup for the outcomes recorded in textual format.
</td> </tr>
<tr>
<td>
Linked outputs/products
</td>
<td>
This dataset will be primarily used to produce the D5.3 deliverable and
possibly for academic publications.
</td> </tr>
<tr>
<td>
Intellectual property rights
</td>
<td>
Linked outputs will be covered by a Creative Commons Attribution-NonCommercial 4.0 License (http://creativecommons.org/licenses/by-nc/4.0).
</td> </tr>
<tr>
<td>
Distribution and sharing channels
</td>
<td>
This dataset is confidential, will only be used for research purposes and will
not be published online or by any other means.
</td> </tr>
<tr>
<td>
Sensitive information
</td>
<td>
Personal data, with consent gained from data subjects using individual
informed consent forms. In linked outputs, sensitive data will be anonymised
or pseudonymised.
</td> </tr>
<tr>
<td>
Language(s)
</td>
<td>
German, English
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
Data storage
</td>
<td>
The dataset will be preserved by SPK IT infrastructure
</td> </tr>
<tr>
<td>
Backup policy
</td>
<td>
2 onsite and 1 offsite copies in sync
</td> </tr> </table>
<table>
<tr>
<th>
**#:** 5
</th>
<th>
**Dataset name:** Outcomes from local encounters on rural heritage
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Opinions, comments and facts collected in a structured way
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Meetings and workshops with local stakeholders held in Spain
</td> </tr>
<tr>
<td>
Creation framework
</td>
<td>
REACH project - WP5 - Participatory Pilots
</td> </tr>
<tr>
<td>
**Project implementation**
</td> </tr>
<tr>
<td>
Partners involved
</td>
<td>
UGR
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5: T5.4
</td> </tr>
<tr>
<td>
Delivery date
</td>
<td>
January 2020
</td> </tr>
<tr>
<td>
**Technical details**
</td> </tr>
<tr>
<td>
Dataset type
</td>
<td>
Alphanumeric data
</td> </tr>
<tr>
<td>
Native (interoperable formats)
</td>
<td>
docx (doc, rtf, odt)
</td> </tr>
<tr>
<td>
Estimated data volume
</td>
<td>
6000 words
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Purpose and use of the dataset
</td>
<td>
Promote co-governance and territorial safe-keeping to protect agrarian
heritage (tangible and intangible) and rural landscapes.
</td> </tr>
<tr>
<td>
Linked outputs/products
</td>
<td>
This dataset will be primarily used to produce the D5.4 deliverable and
possibly for academic publications.
</td> </tr>
<tr>
<td>
Intellectual property rights
</td>
<td>
Linked outputs will be covered by a Creative Commons Attribution-NonCommercial 4.0 License (http://creativecommons.org/licenses/by-nc/4.0).
</td> </tr>
<tr>
<td>
Distribution and sharing channels
</td>
<td>
This dataset is confidential, will only be used for research purposes and will
not be published online or by any other means.
</td> </tr>
<tr>
<td>
Sensitive information
</td>
<td>
Personal data, with consent gained from data subjects using individual
informed consent forms. In linked outputs, sensitive data will be anonymised
or pseudonymised.
</td> </tr>
<tr>
<td>
Language(s)
</td>
<td>
Spanish
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
Data storage
</td>
<td>
The dataset will be preserved by UGR IT infrastructure
</td> </tr>
<tr>
<td>
Backup policy
</td>
<td>
2 onsite and 1 offsite copies in sync
</td> </tr> </table>
<table>
<tr>
<th>
**#:** 6
</th>
<th>
**Dataset name:** Recordings of local encounters on rural heritage
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Recording of one-to-one and focus groups conversations
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Meetings and workshops with local stakeholders held in Spain
</td> </tr>
<tr>
<td>
Creation framework
</td>
<td>
REACH project - WP5
</td> </tr>
<tr>
<td>
**Project implementation**
</td> </tr>
<tr>
<td>
Partners involved
</td>
<td>
UGR
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5: T5.4
</td> </tr>
<tr>
<td>
Delivery date (estimated)
</td>
<td>
January 2020
</td> </tr>
<tr>
<td>
**Technical details**
</td> </tr>
<tr>
<td>
Dataset type
</td>
<td>
Video/audio recordings
</td> </tr>
<tr>
<td>
Native (interoperable formats)
</td>
<td>
mp4 (ogg, mkv) - mp3 (ogg)
</td> </tr>
<tr>
<td>
Estimated data volume
</td>
<td>
30GB in mp4 – 2.8GB in mp3
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Purpose and use of the dataset
</td>
<td>
The full recordings of the local encounters are intended as a backup for the
outcomes recorded in textual format.
</td> </tr>
<tr>
<td>
Linked outputs/products
</td>
<td>
This dataset will be primarily used to produce the D5.4 deliverable and
possibly for academic publications.
</td> </tr>
<tr>
<td>
Intellectual property rights
</td>
<td>
Linked outputs will be covered by a Creative Commons Attribution-NonCommercial 4.0 License (http://creativecommons.org/licenses/by-nc/4.0).
</td> </tr>
<tr>
<td>
Distribution and sharing channels
</td>
<td>
This dataset is confidential, will only be used for research purposes and will
not be published online or by any other means.
</td> </tr>
<tr>
<td>
Sensitive information
</td>
<td>
Personal data, with consent gained from data subjects using individual
informed consent forms. In linked outputs, sensitive data will be anonymised
or pseudonymised.
</td> </tr>
<tr>
<td>
Language(s)
</td>
<td>
Spanish
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
Data storage
</td>
<td>
The dataset will be preserved by UGR IT infrastructure
</td> </tr>
<tr>
<td>
Backup policy
</td>
<td>
2 onsite and 1 offsite copies in sync
</td> </tr> </table>
<table>
<tr>
<th>
**#:** 7
</th>
<th>
**Dataset name:** Outcomes from local encounters on small towns’ heritage
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Opinions, comments and facts collected in a structured way
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Meetings and workshops with associate partners held in Czech Republic and in
Italy
</td> </tr>
<tr>
<td>
Creation framework
</td>
<td>
REACH project - WP5 - Participatory Pilots
</td> </tr>
<tr>
<td>
**Project implementation**
</td> </tr>
<tr>
<td>
Partners involved
</td>
<td>
CUNI, MISE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5: T5.5
</td> </tr>
<tr>
<td>
Delivery date
</td>
<td>
January 2020
</td> </tr>
<tr>
<td>
**Technical details**
</td> </tr>
<tr>
<td>
Dataset type
</td>
<td>
Alphanumeric data
</td> </tr>
<tr>
<td>
Native (interoperable formats)
</td>
<td>
docx (doc, rtf, odt)
</td> </tr>
<tr>
<td>
Estimated data volume
</td>
<td>
6000 words
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Purpose and use of the dataset
</td>
<td>
Analysis of the representations of small towns’ heritage as displayed through
museums, local histories, pageants and festivals, heritage trails, the urban
space and alike.
</td> </tr>
<tr>
<td>
Linked outputs/products
</td>
<td>
This dataset will be primarily used to produce the D5.5 deliverable and
possibly for academic publications.
</td> </tr>
<tr>
<td>
Intellectual property rights
</td>
<td>
Linked outputs will be covered by a Creative Commons Attribution-NonCommercial 4.0 License (http://creativecommons.org/licenses/by-nc/4.0).
</td> </tr>
<tr>
<td>
Distribution and sharing channels
</td>
<td>
This dataset is confidential, will only be used for research purposes and will
not be published online or by any other means.
</td> </tr>
<tr>
<td>
Sensitive information
</td>
<td>
Personal data, with consent gained from data subjects using individual
informed consent forms. In linked outputs, sensitive data will be anonymised
or pseudonymised.
</td> </tr>
<tr>
<td>
Language(s)
</td>
<td>
Czech, Slovak, Italian
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
Data storage
</td>
<td>
The dataset will be preserved, respectively, by CUNI and MISE IT
infrastructure
</td> </tr>
<tr>
<td>
Backup policy
</td>
<td>
2 onsite and 1 offsite copies in sync
</td> </tr> </table>
<table>
<tr>
<th>
**#:** 8
</th>
<th>
**Dataset name:** Recordings of local encounters on small towns’ heritage
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Recording of one-to-one and focus groups conversations
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Meetings and workshops with associate partners held in Czech Republic and in
Italy
</td> </tr>
<tr>
<td>
Creation framework
</td>
<td>
REACH project - WP5
</td> </tr>
<tr>
<td>
**Project implementation**
</td> </tr>
<tr>
<td>
Partners involved
</td>
<td>
CUNI, MISE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5: T5.5
</td> </tr>
<tr>
<td>
Delivery date (estimated)
</td>
<td>
January 2020
</td> </tr>
<tr>
<td>
**Technical details**
</td> </tr>
<tr>
<td>
Dataset type
</td>
<td>
Video/audio recordings
</td> </tr>
<tr>
<td>
Native (interoperable formats)
</td>
<td>
mp4 (ogg, mkv) - mp3 (ogg)
</td> </tr>
<tr>
<td>
Estimated data volume
</td>
<td>
30GB in mp4 – 2.8GB in mp3
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Purpose and use of the dataset
</td>
<td>
The full recordings of the local encounters are intended as a backup for the
outcomes recorded in textual format.
</td> </tr>
<tr>
<td>
Linked outputs/products
</td>
<td>
This dataset will be primarily used to produce the D5.5 deliverable and
possibly for academic publications.
</td> </tr>
<tr>
<td>
Intellectual property rights
</td>
<td>
Linked outputs will be covered by a Creative Commons Attribution-NonCommercial 4.0 License (http://creativecommons.org/licenses/by-nc/4.0).
</td> </tr>
<tr>
<td>
Distribution and sharing channels
</td>
<td>
This dataset is confidential, will only be used for research purposes and will
not be published online or by any other means.
</td> </tr>
<tr>
<td>
Sensitive information
</td>
<td>
Personal data, with consent gained from data subjects using individual
informed consent forms. In linked outputs, sensitive data will be anonymised
or pseudonymised.
</td> </tr>
<tr>
<td>
Language(s)
</td>
<td>
Czech, Slovak, Italian
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
Data storage
</td>
<td>
The dataset will be preserved, respectively, by CUNI and MISE IT
infrastructure
</td> </tr>
<tr>
<td>
Backup policy
</td>
<td>
2 onsite and 1 offsite copies in sync
</td> </tr>
<tr>
<td>
**#:** 9
</td>
<td>
**Dataset name:** Contact list on minority heritage
</td> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Email addresses and contact details of organizations, institutions and key-
persons related to Roma cultural heritage in Hungary
</td> </tr>
<tr>
<td>
Source
</td>
<td>
ELTE networking
</td> </tr>
<tr>
<td>
Creation framework
</td>
<td>
REACH project – previous national and EU projects
</td> </tr>
<tr>
<td>
**Project implementation**
</td> </tr>
<tr>
<td>
Partners involved
</td>
<td>
ELTE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5: T5.2
</td> </tr>
<tr>
<td>
Delivery date
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Technical details**
</td> </tr>
<tr>
<td>
Dataset type
</td>
<td>
Alphanumeric data
</td> </tr>
<tr>
<td>
Native (interoperable formats)
</td>
<td>
xlsx (xls, csv, txt)
</td> </tr>
<tr>
<td>
Estimated number of records
</td>
<td>
50
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Purpose and use of the dataset
</td>
<td>
Preparation of workshops, focus groups and participatory activities. Targeted
dissemination of results on Roma heritage.
</td> </tr>
<tr>
<td>
Linked outputs/products
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Intellectual property rights
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Distribution and sharing channels
</td>
<td>
This dataset is confidential, will only be used for REACH’s activities and
will not be published online or by any other means.
</td> </tr>
<tr>
<td>
Sensitive information
</td>
<td>
Personal data
</td> </tr>
<tr>
<td>
Language(s)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
Data storage
</td>
<td>
The dataset will be preserved by ELTE IT infrastructure
</td> </tr>
<tr>
<td>
Backup policy
</td>
<td>
2 onsite and 1 offsite copies in sync
</td> </tr> </table>
<table>
<tr>
<th>
**#:** 10
</th>
<th>
**Dataset name:** Contact list on institutional heritage
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Email addresses and contact details of cultural institutions and public
administrations
</td> </tr>
<tr>
<td>
Source
</td>
<td>
SPK networking
</td> </tr>
<tr>
<td>
Creation framework
</td>
<td>
REACH project – previous national and EU projects
</td> </tr>
<tr>
<td>
**Project implementation**
</td> </tr>
<tr>
<td>
Partners involved
</td>
<td>
SPK
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4: T4.2 - WP5: T5.3
</td> </tr>
<tr>
<td>
Delivery date
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Technical details**
</td> </tr>
<tr>
<td>
Dataset type
</td>
<td>
Alphanumeric data
</td> </tr>
<tr>
<td>
Native (interoperable formats)
</td>
<td>
xlsx (xls, csv, txt)
</td> </tr>
<tr>
<td>
Estimated number of records
</td>
<td>
50
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Purpose and use of the dataset
</td>
<td>
Preparation of workshops and focus groups.
Targeted dissemination of results on institutional heritage.
</td> </tr>
<tr>
<td>
Linked outputs/products
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Intellectual property rights
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Distribution and sharing channels
</td>
<td>
This dataset is confidential, will only be used for REACH’s activities and
will not be published online or by any other means.
</td> </tr>
<tr>
<td>
Sensitive information
</td>
<td>
Personal data
</td> </tr>
<tr>
<td>
Language(s)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
Data storage
</td>
<td>
The dataset will be preserved by SPK IT infrastructure
</td> </tr>
<tr>
<td>
Backup policy
</td>
<td>
2 onsite and 1 offsite copies in sync
</td> </tr> </table>
<table>
<tr>
<th>
**#:** 11
</th>
<th>
**Dataset name:** Contact list on rural heritage
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Email addresses and contact details of local administrations, local
communities and citizens organizations related to rural cultural heritage in
Spain. Email addresses and contact details of volunteers for participatory
activities.
</td> </tr>
<tr>
<td>
Source
</td>
<td>
UGR networking
</td> </tr>
<tr>
<td>
Creation framework
</td>
<td>
REACH project – previous national and EU projects
</td> </tr>
<tr>
<td>
**Project implementation**
</td> </tr>
<tr>
<td>
Partners involved
</td>
<td>
UGR
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4: T4.4 - WP5: T5.4
</td> </tr>
<tr>
<td>
Delivery date
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Technical details**
</td> </tr>
<tr>
<td>
Dataset type
</td>
<td>
Alphanumeric data
</td> </tr>
<tr>
<td>
Native (interoperable formats)
</td>
<td>
xlsx (xls, csv, txt)
</td> </tr>
<tr>
<td>
Estimated number of records
</td>
<td>
100
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Purpose and use of the dataset
</td>
<td>
Preparation of workshops, focus groups and participatory activities. Targeted
dissemination of results on rural heritage.
</td> </tr>
<tr>
<td>
Linked outputs/products
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Intellectual property rights
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Distribution and sharing channels
</td>
<td>
This dataset is confidential, will only be used for REACH’s activities and
will not be published online or by any other means.
</td> </tr>
<tr>
<td>
Sensitive information
</td>
<td>
Personal data
</td> </tr>
<tr>
<td>
Language(s)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
Data storage
</td>
<td>
The dataset will be preserved by UGR IT infrastructure
</td> </tr>
<tr>
<td>
Backup policy
</td>
<td>
2 onsite and 1 offsite copies in sync
</td> </tr> </table>
<table>
<tr>
<th>
**#:** 12
</th>
<th>
**Dataset name:** Contact list on small towns’ heritage
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Email addresses and contact details of organizations, institutions and key-
persons related to small towns’ cultural heritage in Czech Republic and in
Italy
</td> </tr>
<tr>
<td>
Source
</td>
<td>
CUNI and MISE networking
</td> </tr>
<tr>
<td>
Creation framework
</td>
<td>
REACH project – previous national and EU projects
</td> </tr>
<tr>
<td>
**Project implementation**
</td> </tr>
<tr>
<td>
Partners involved
</td>
<td>
CUNI, MISE
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP4: T4.5 - WP5: T5.5
</td> </tr>
<tr>
<td>
Delivery date (estimated)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Technical details**
</td> </tr>
<tr>
<td>
Dataset type
</td>
<td>
Alphanumeric data
</td> </tr>
<tr>
<td>
Native (interoperable formats)
</td>
<td>
xlsx (xls, csv, txt)
</td> </tr>
<tr>
<td>
Estimated number of records
</td>
<td>
50
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Purpose and use of the dataset
</td>
<td>
Preparation of workshops, focus groups and participatory activities. Targeted
dissemination of results on small towns’ heritage.
</td> </tr>
<tr>
<td>
Linked outputs/products
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Intellectual property rights
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Distribution and sharing channels
</td>
<td>
This dataset is confidential, will only be used for REACH’s activities and
will not be published online or by any other means.
</td> </tr>
<tr>
<td>
Sensitive information
</td>
<td>
Personal data
</td> </tr>
<tr>
<td>
Language(s)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
Data storage
</td>
<td>
The dataset will be preserved, respectively, by CUNI and MISE IT
infrastructure
</td> </tr>
<tr>
<td>
Backup policy
</td>
<td>
2 onsite and 1 offsite copies in sync
</td> </tr> </table>
<table>
<tr>
<th>
**#:** 13
</th>
<th>
**Dataset name:** REACH Newsletter Subscribers
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Mailing list containing email addresses and names of all subscribers to the
REACH’s newsletter
</td> </tr>
<tr>
<td>
Source
</td>
<td>
This dataset is automatically generated when visitors register using the
newsletter form available on the project website.
</td> </tr>
<tr>
<td>
Creation framework
</td>
<td>
REACH project – Digital Meets Culture
</td> </tr>
<tr>
<td>
**Project implementation**
</td> </tr>
<tr>
<td>
Partners involved
</td>
<td>
PROMOTER
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP2
</td> </tr>
<tr>
<td>
Delivery date
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Technical details**
</td> </tr>
<tr>
<td>
Dataset type
</td>
<td>
Alphanumeric data
</td> </tr>
<tr>
<td>
Native (interoperable formats)
</td>
<td>
csv (xlsx, xls, txt)
</td> </tr>
<tr>
<td>
Estimated number of records
</td>
<td>
7000
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Purpose and use of the dataset
</td>
<td>
The mailing list will be used to disseminate, twice a year, the project
newsletter to a targeted audience.
</td> </tr>
<tr>
<td>
Linked outputs/products
</td>
<td>
REACH’s newsletter. An analysis of newsletter subscribers may be performed in
order to assess and improve the overall visibility of the project.
</td> </tr>
<tr>
<td>
Intellectual property rights
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Distribution and sharing channels
</td>
<td>
This dataset is confidential, will only be used for REACH’s newsletter and
will not be published online or by any other means.
</td> </tr>
<tr>
<td>
Sensitive information
</td>
<td>
Personal data
</td> </tr>
<tr>
<td>
Language(s)
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
Data storage
</td>
<td>
The dataset will be preserved by PROMOTER IT infrastructure
</td> </tr>
<tr>
<td>
Backup policy
</td>
<td>
2 onsite and 1 offsite copies in sync
</td> </tr>
<tr>
<td>
**#:** 14
</td>
<td>
**Dataset name:** Database of participatory activities good practices in CH
</td> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Collection of good practice examples of participatory activities in cultural
heritage
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Online search and personal experiences from REACH’s working group
</td> </tr>
<tr>
<td>
Creation framework
</td>
<td>
REACH project – WP6
</td> </tr>
<tr>
<td>
**Project implementation**
</td> </tr>
<tr>
<td>
Partners involved
</td>
<td>
COVUNI, PROMOTER, UGR, ELTE, CUNI, SPK
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP3: T3.1 - WP6: T6.2
</td> </tr>
<tr>
<td>
Delivery date
</td>
<td>
November 2018
</td> </tr>
<tr>
<td>
**Technical details**
</td> </tr>
<tr>
<td>
Dataset type
</td>
<td>
Alphanumeric tabular data
</td> </tr>
<tr>
<td>
Native (interoperable formats)
</td>
<td>
SQL (xlsx, csv, txt)
</td> </tr>
<tr>
<td>
Estimated number of records
</td>
<td>
150
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Purpose and use of the dataset
</td>
<td>
Identify results from previous projects, in terms of methodologies and
techniques used to carry out participatory activities in the cultural heritage
field.
</td> </tr>
<tr>
<td>
Linked outputs/products
</td>
<td>
http://open-heritage.eu
</td> </tr>
<tr>
<td>
Intellectual property rights
</td>
<td>
The original sources of information consulted will be cited on the individual
forms in the public website
</td> </tr>
<tr>
<td>
Distribution and sharing channels
</td>
<td>
The dataset will be published on http://open-heritage.eu and shared across
REACH’s outreach channels
</td> </tr>
<tr>
<td>
Sensitive information
</td>
<td>
N/A. All data included in this dataset is already publicly available.
</td> </tr>
<tr>
<td>
Language(s)
</td>
<td>
English
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
Data storage
</td>
<td>
The dataset will be hosted and preserved by PROMOTER IT infrastructure
</td> </tr>
<tr>
<td>
Backup policy
</td>
<td>
1 onsite, 1 online and 1 weekly offsite copies in sync
</td> </tr> </table>
<table>
<tr>
<th>
**#:** 15
</th>
<th>
**Dataset name:** Data from participation in workshops and pilots
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Quantified description of participants in REACH’s activities
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Meetings, workshops, focus groups and participatory activities.
</td> </tr>
<tr>
<td>
Creation framework
</td>
<td>
REACH project - WP5 - Participatory Pilots
</td> </tr>
<tr>
<td>
**Project implementation**
</td> </tr>
<tr>
<td>
Partners involved
</td>
<td>
ELTE, SPK, CUNI, UGR
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5: T5.2, T5.3, T5.4, T5.5 – WP6: T6.1
</td> </tr>
<tr>
<td>
Delivery date
</td>
<td>
April 2020
</td> </tr>
<tr>
<td>
**Technical details**
</td> </tr>
<tr>
<td>
Dataset type
</td>
<td>
Alphanumeric tabular data
</td> </tr>
<tr>
<td>
Native (interoperable formats)
</td>
<td>
xlsx (xls, csv, txt)
</td> </tr>
<tr>
<td>
Estimated number of records
</td>
<td>
40
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Purpose and use of the dataset
</td>
<td>
Quantified monitoring of attendance and participation in REACH’s workshops
and pilots, supporting the evaluation of the project’s participatory
activities.
</td> </tr>
<tr>
<td>
Linked outputs/products
</td>
<td>
This dataset will be primarily used to produce the D6.3 deliverable and
possibly for academic publications.
</td> </tr>
<tr>
<td>
Intellectual property rights
</td>
<td>
Linked outputs will be covered by a Creative Commons Attribution-NonCommercial 4.0 License (http://creativecommons.org/licenses/by-nc/4.0).
</td> </tr>
<tr>
<td>
Distribution and sharing channels
</td>
<td>
This dataset is confidential, will only be used for research purposes and will
not be published online or by any other means.
</td> </tr>
<tr>
<td>
Sensitive information
</td>
<td>
Personal data, with consent gained from data subjects using individual
informed consent forms. In linked outputs, sensitive data will be anonymised
or pseudonymised.
</td> </tr>
<tr>
<td>
Language(s)
</td>
<td>
English
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
Data storage
</td>
<td>
The dataset will be preserved by UGR IT infrastructure
</td> </tr>
<tr>
<td>
Backup policy
</td>
<td>
2 onsite and 1 offsite copies in sync
</td> </tr> </table>
<table>
<tr>
<th>
**#:** 16
</th>
<th>
**Dataset name:** Informed consent forms
</th> </tr>
<tr>
<td>
**Data identification**
</td> </tr>
<tr>
<td>
Dataset description
</td>
<td>
Informed consent forms signed by people involved in local encounters
</td> </tr>
<tr>
<td>
Source
</td>
<td>
Structured interviews carried out during pilots’ meetings and workshops
</td> </tr>
<tr>
<td>
Creation framework
</td>
<td>
REACH project - WP5 - Participatory Pilots
</td> </tr>
<tr>
<td>
**Project implementation**
</td> </tr>
<tr>
<td>
Partners involved
</td>
<td>
ELTE, SPK, CUNI, UGR, COVUNI
</td> </tr>
<tr>
<td>
Related WP(s) and task(s)
</td>
<td>
WP5: T5.2, T5.3, T5.4, T5.5
</td> </tr>
<tr>
<td>
Delivery date
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Technical details**
</td> </tr>
<tr>
<td>
Dataset type
</td>
<td>
Paper forms
</td> </tr>
<tr>
<td>
Native (interoperable formats)
</td>
<td>
docx (doc, rtf, odt)
</td> </tr>
<tr>
<td>
Estimated number of records
</td>
<td>
200
</td> </tr>
<tr>
<td>
**Data exploitation and sharing**
</td> </tr>
<tr>
<td>
Purpose and use of the dataset
</td>
<td>
To deal with data protection and confidentiality on information provided by
people involved in REACH’s focus groups.
</td> </tr>
<tr>
<td>
Linked outputs/products
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Intellectual property rights
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Distribution and sharing channels
</td>
<td>
This dataset is confidential, will only be used during REACH’s activities and
will not be published online or by any other means.
</td> </tr>
<tr>
<td>
Sensitive information
</td>
<td>
Personal data
</td> </tr>
<tr>
<td>
Language(s)
</td>
<td>
Hungarian, Czech, Spanish, German, English
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td> </tr>
<tr>
<td>
Data storage
</td>
<td>
Each partner will preserve the forms signed during its own activities.
</td> </tr>
<tr>
<td>
Backup policy
</td>
<td>
N/A
</td> </tr> </table>
### 5\. CONCLUSION
The purpose of this document, the Data Management Plan (DMP), is to provide an
overview of the datasets that will be collected, processed and/or generated
during the project, along with the related challenges and issues that need to
be taken into consideration, and a plan for managing the data life cycle. The
following issues have been covered:
* datasets to be collected, processed or generated;
* methodology and standards to be applied;
* products or outputs to be produced from these datasets;
* whether data will be publicly shared or for internal use only;
* how sensitive personal data will be treated;
* how data will be curated and preserved.
Although the main effort in data collection will come from the pilots, nearly
all project partners will be collectors and/or producers of data, which
implies specific responsibilities that have been outlined. Special attention
will be given to ensuring that published data does not break any IPR rules,
nor regulations and good practices related to ethics and personal data
protection. To this end, individual informed consent forms will be collected
and systematic anonymisation or pseudonymisation of personal data may be
implemented, as appropriate.
This version of the DMP currently lists 16 datasets, mainly relating to the
four pilots carried out by ELTE, SPK, UGR and CUNI. These datasets can be
divided into three main groups: 4 outcomes from local encounters, 4
recordings of one-to-one and focus group conversations, and 4 contact lists
of people involved in these activities. Additionally, the pilots’ workshops
and focus groups will also produce dataset #15, on participation, which will
be managed by UGR, and dataset #16, the informed consent forms, for which
COVUNI is ultimately responsible.
The list of subscribers to REACH’s newsletter (#13) will be collected and
preserved by partner PROMOTER, which will also host the database of
participatory activities good practices in CH (#14), created by a combined
effort of almost all project partners.
Due to the nature of the REACH project as a Coordination and Support Action
that will bring together and collect personal and sensitive data from a
wide-ranging network of subjects involved in cultural heritage (local
administrations, cultural heritage professionals, academic experts, arts
practitioners, professionals in archives and galleries, associations,
interest groups and policy-makers), most datasets will not be published
online or by any other means; the only exception is dataset #14, which will
be available at https://open-heritage.eu.
Regarding delivery dates, outcomes and recordings from local encounters will
be collected up to month 27 (January 2020); the database of good practices
from previous projects will be published in month 12 (November 2018); at month
30 (April 2020), the data from participation in workshops and pilots will be
integrated into deliverable D6.3. The other datasets do not have specific
delivery dates.
Following the EU’s guidelines, this Data Management Plan will be reviewed on a
regular basis. It will be reissued annually (in Months 18 and 30) and
submitted to the Commission as part of the Periodic Report at the halfway
stage and at the end of the REACH project.
With the submission of this Data Management Plan, in month 6, as initially
planned, the REACH project has achieved milestone MS13, but more than that, it
has provided a structure and clear guidance for project partners to use when
undertaking project activity.
# EXECUTIVE SUMMARY
The goal of this deliverable is to define the initial data sets to be used in
the CAPTAIN project, along with the processes associated with those data sets
and the definition of user rights according to the GDPR. This document is
designed as a set of tables defining the data sets, the guidelines associated
with each type of data set, and the processes of data collection, storage,
publication and security.
This report complements the provisions of the Grant Agreement and its annexes
as well as the Consortium Agreement, which has been already signed by all the
beneficiaries, in that it clearly defines the data sets of the project and
processes associated.
# INTRODUCTION
The goal of CAPTAIN is to offer assistance through a very comfortable,
intuitive, transparent yet effective interaction paradigm: transforming the
home itself into a transparent and tangible interface where instructions are
projected onto the real context, augmenting it when and where needed, while
interaction occurs by touching or moving real-life objects.
CAPTAIN combines emerging non-invasive technology to deliver a radically new
ICT-based system that empowers and motivates older adults in need of guidance
and care, through:
* Projective Augmented Reality (PAR). Although PAR has been a fairly well studied research field for many years, there has been hardly any use of it in assistive technology for older adults.
* Gamified-coaching experience. A crucial part of this is the development of the virtual coach, which provides a continuous communication and empowerment mechanism across the CAPTAIN experience.
* Non-invasive physiological and emotional data analysis from facial micro-expressions and human body pose analysis, thanks to scalable, robust, and accurate deep learning and artificial intelligence technologies.
* Non-invasive movement and gait data analysis through real-time location of the person from 3D sensors, which will be used to identify seniors’ movements.
The data coming from the cameras, pico-projectors, sensors, etc. integrated in
the home environment will be collected and analysed through automated
reasoning and autonomous learning algorithms.
In order to ensure integration with existing IT ecosystems and maximise
interoperability, CAPTAIN will develop an open source API (Application
Programming Interface). Through the API it will be possible to access
existing building blocks and expose all of CAPTAIN’s functionalities (inputs,
outputs, intermediate inferences and algorithm results) to third-party
solutions (both research projects and commercial products).
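As an indication of how a third-party solution might consume such an API once it is defined, the sketch below shows a minimal client call. The base URL, endpoint path, token scheme and response fields are illustrative assumptions, not the actual CAPTAIN API, which will be specified by the project architecture.

```python
# Minimal sketch of a third-party client calling a hypothetical CAPTAIN API.
# The base URL, endpoint, token and response fields are illustrative
# assumptions; the real API will be defined by the CAPTAIN architecture.
import requests

BASE_URL = "https://captain.example.org/api/v1"  # hypothetical
TOKEN = "researcher-access-token"                # hypothetical

def get_gait_analysis(subject_pseudonym: str) -> dict:
    """Fetch pseudonymised gait-analysis results for one subject."""
    response = requests.get(
        f"{BASE_URL}/subjects/{subject_pseudonym}/gait-analysis",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # The argument is a pseudonym, never a real identity (see the data
    # minimisation and pseudonymisation sketches below).
    print(get_gait_analysis("a3f9c2"))
```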
This document is the CAPTAIN data management plan (DMP). The DMP describes the
data management life cycle for all datasets to be collected, processed and/or
generated by the research project. The CAPTAIN DMP describes, among others:
* The handling of data during and after the project
* The type of data that will be collected, processed, or gathered
* How the data will be pseudonymized and secured
* Whether and how the data will be made (openly) accessible
* How the data is stored (all of the above constituting the “preventive measures taken to ensure protection”)
* The role of the Data Management Official
* “Reaction measures” in case of a data breach
This version of the document is the first one to be published; as a living
document reflecting the evolving nature of the project itself, it will be
updated regularly with the corresponding changes, and the final version of
the DMP is due in M36 of the project.
Separately from this DMP, the CAPTAIN Ethics policy and Study Plan have been
designed (D1.2, WP10 and WP7 deliverables).
The procedures described in this DMP shall be followed by all members of the
consortium and ensure that data on human participants are collected, stored,
transferred and used in a secure setting, and that use of the data is
compliant with ethico-legal requirements (including signed informed consent,
ethics approval, and the applicable data protection laws, in particular the
EU data protection regulation, which is applicable from May 2018). Management of
datasets that include personal information and health information of data
subjects will be compliant with the General Data Protection Regulation (GDPR,
Regulation (EU) 2016/679). The GDPR is a regulation by which the European
Parliament, the European Council and the European Commission intend to
strengthen and unify data protection for individuals within the European Union
(EU).
Despite the fact that the GDPR will affect each country's legislation in
different ways, CAPTAIN will follow the data minimisation principle reflected
in the GDPR, regardless of location: data collected will be adequate,
relevant and limited to what is necessary in relation to the purposes for
which they are processed.
# DATA SETS
The data sets collected by CAPTAIN can be divided by accessibility, storage
location, lifespan and access criteria.
By accessibility data sets are divided as:
* Restricted Data – data sets accessible only by Data Subjects and defined CAPTAIN Administrator roles. These data sets are stored locally in CAPTAIN appliances in an encrypted way and should not leave the appliance at any time.
* Service Data – data sets used by CAPTAIN components for providing their functionality and services. From a research perspective these data sets could be useful, although at the time of this DMP's creation no institution had clearly requested them. They are proposed to be pseudonymised and stored in CAPTAIN local storage if not requested, or deleted immediately after processing.
* Pseudo-Opened Data – data sets collected for research purposes and/or CAPTAIN service provision. These data should be pseudonymised, stored in the CAPTAIN Cloud and made accessible for institutional research purposes.
Restricted Data should be linked to Service Data and Pseudo-Opened Data in
such a way that the Data Subject can at any time exercise his/her rights
regarding the collected data, per GDPR.
The exact data linkage mechanism will be defined as the CAPTAIN project
evolves.
By storage location data sets are divided as:
* Local Data Sets – data sets stored in an encrypted way in a secured space inside the CAPTAIN appliance; they never leave the CAPTAIN installation location. These are the Restricted Data and Service Data sets.
* External Data Sets – data sets aimed at research and/or service purposes and stored in a pseudonymised way in the CAPTAIN Cloud. Nothing prevents temporarily storing these data sets inside CAPTAIN for internal processing, but their final destination will be the CAPTAIN Cloud. Pseudo-Opened Data sets fall under this definition.
Details about the storages and their implementation will be covered by the
CAPTAIN architecture and reflected in the DMP as the project evolves.
By lifespan data sets are divided as:
* Temporal data sets – data sets collected during CAPTAIN development sprints. These data sets are to be used specifically for development purposes and should be deleted after CAPTAIN release/deployment. Nothing prevents the deletion of any part of, or all, temporal data sets at any time prior to the M36 milestone.
* “Permanent” data sets – data sets to be collected after CAPTAIN release and used in deployed CAPTAIN appliances. These data sets will follow the lifetime rules set in GDPR.
CAPTAIN should define several roles with various data access rights, allowing
various personnel to access the collected data for processing purposes. The
following roles are identified in CAPTAIN, with the corresponding data set
access criteria (a minimal access-check sketch is given after the list):
* Data Subject/Nominated Proxy – the CAPTAIN user or a person representing him/her. Should be able to access all data sets related to him/her and exercise his/her rights over these data.
* Administrator – technical role, responsible for CAPTAIN HW/SW installation, configuration, maintenance and support. Only Service Data sets are accessible, if required, by this role.
* "Clinical" – CAPTAIN power user, responsible for CAPTAIN services/utilities configuration and management, and for monitoring Data Subject activities. Service and Pseudo-Opened Data sets should be accessible by this role.
* Researcher – Pseudo-Opened Data Sets consumer, using these sets for research and publishing purposes.
3.1 RESTRICTED DATA SETS
1. Data Subject ID Personal Data
<table>
<tr>
<th>
**DATA SET REFERENCE NAME**
</th>
<th>
**DataSubjectID**
</th> </tr>
<tr>
<td>
**DATA SET DESCRIPTION**
</td> </tr>
<tr>
<td>
**Generic description**
</td> </tr>
<tr>
<td>
CAPTAIN Data set, targeted at unique identification of the Data Subject as a
CAPTAIN appliance user:
● CAPTAIN appliance ID – an ID linked to the Data Subject, pseudonymising him/her
at the same time. **No Name/Family Names nor National ID will be collected**.
● Links to Service Data and Pseudo-Opened Data
</td> </tr>
<tr>
<td>
**Origin of data**
</td> </tr>
<tr>
<td>
Data Subject him/herself, after consent is given.
</td> </tr> </table>
<table>
<tr>
<th>
**Nature and scale of data**
</th> </tr>
<tr>
<td>
It is expected that the dataset will be set up during CAPTAIN activation and
assignment to a dedicated Data Subject. The specified data sets will be used for
the unique identification of the data sets specified below and will by no means
be opened to anyone other than the Data Subject and the corresponding CAPTAIN
appliance roles. _Data Format:_ TBD.
</td> </tr>
<tr>
<td>
**To whom the dataset could be useful**
</td> </tr>
<tr>
<td>
The collected data is not targeted for external usage.
</td> </tr>
<tr>
<td>
**Related scientific publication(s)**
</td> </tr>
<tr>
<td>
N/A
</td> </tr>
<tr>
<td>
**Indicative existing similar data sets** (including possibilities for
integration and reuse)
</td> </tr>
<tr>
<td>
N/A
</td> </tr>
<tr>
<td>
**STANDARDS AND METADATA**
</td> </tr>
<tr>
<td>
N/A
</td> </tr>
<tr>
<td>
**DATA SHARING**
</td> </tr>
<tr>
<td>
**Access type**
</td> </tr>
<tr>
<td>
As identified by CAPTAIN Roles.
</td> </tr>
<tr>
<td>
**Access Procedures**
</td> </tr>
<tr>
<td>
These data sets are never shared and can be accessed by the Data
Subject/Nominated Proxy for the exercise of his/her Data Subject rights per GDPR.
</td> </tr>
<tr>
<td>
**Embargo periods** (if any)
</td> </tr>
<tr>
<td>
N/A
</td> </tr>
<tr>
<td>
**Technical mechanisms for dissemination**
</td> </tr>
<tr>
<td>
N/A
</td> </tr>
<tr>
<td>
**Necessary S/W and other tools for enabling re-use**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Repository where data will be stored**
</td> </tr>
<tr>
<td>
Locally at CAPTAIN internal storage.
</td> </tr>
<tr>
<td>
**ARCHIVING AND PRESERVATION** (including storage and backup)
</td> </tr>
<tr>
<td>
**Data preservation period**
</td> </tr>
<tr>
<td>
N/A
</td> </tr>
<tr>
<td>
**Approximated end volume of data**
</td> </tr>
<tr>
<td>
The dataset volume is expected to be TBD.
</td> </tr>
<tr>
<td>
**Indicative associated costs for data archiving and preservation**
</td> </tr>
<tr>
<td>
TBD.
</td> </tr>
<tr>
<td>
**Indicative plan for covering the above costs**
</td>
<td>
</td> </tr>
<tr>
<td>
Small one-time costs covered by CAPTAIN.
</td>
<td>
</td> </tr>
<tr>
<td>
**PARTNERS ACTIVITIES AND RESPONSIBILITIES**
</td>
<td>
</td> </tr>
<tr>
<td>
**Partner Owner / Data Collector**
</td>
<td>
Data Subject/Nominated Proxy
</td> </tr>
<tr>
<td>
**Partner in charge of the data analysis**
</td>
<td>
</td> </tr>
<tr>
<td>
**Partner in charge of the data storage**
</td>
<td>
</td> </tr>
<tr>
<td>
**WPs and Tasks**
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td> </tr> </table>
2. Data Protection
The Restricted Data Set is protected from unauthorised access by all available
means, including encryption and anonymization. At the same time, upon request by
the Data Subject, all corresponding types of datasets can be fetched for the
exercise of Data Subject rights. Keeping these data sets locally in the CAPTAIN
appliance's secure storage adds yet another security level.
3.2 SERVICE DATA SETS
1. Data Subject Personal Data
<table>
<tr>
<th>
**DATA SET REFERENCE NAME**
</th>
<th>
**DataSubject**
</th> </tr>
<tr>
<td>
**DATA SET DESCRIPTION**
</td> </tr>
<tr>
<td>
**Generic description**
</td> </tr>
<tr>
<td>
CAPTAIN Data set, targeted at Data Subject personal information such as, but
not limited to:
* Age
* Sex
* Location
* "Diagnosis"
* Activities Recommendations
* Online Services identifiers
* Photos made by CAPTAIN, if such a function is available
* Link from CAPTAIN appliance ID – used for chaining data sets
* Links to Service Data and Pseudo-Opened Data
</td> </tr>
<tr>
<td>
**Origin of data**
</td> </tr>
<tr>
<td>
Data Subject him/herself, after consent is given.
</td> </tr>
<tr>
<td>
**Nature and scale of data**
</td> </tr>
<tr>
<td>
It is expected that the datasets will be collected during CAPTAIN everyday
usage/activity. _Data Format:_ TBD.
</td> </tr>
<tr>
<td>
**To whom the dataset could be useful**
</td> </tr>
<tr>
<td>
Part of the collected data could be useful for researchers (age/location,
etc.), while another part (identifiers/photos, etc.) could be used by various
CAPTAIN components, API for providing corresponding services.
</td> </tr> </table>
<table>
<tr>
<th>
**Related scientific publication(s)**
</th> </tr>
<tr>
<td>
N/A
</td> </tr>
<tr>
<td>
**Indicative existing similar data sets** (including possibilities for
integration and reuse)
</td> </tr>
<tr>
<td>
N/A
</td> </tr>
<tr>
<td>
**STANDARDS AND METADATA**
</td> </tr>
<tr>
<td>
N/A
</td> </tr>
<tr>
<td>
**DATA SHARING**
</td> </tr>
<tr>
<td>
**Access type**
</td> </tr>
<tr>
<td>
Given the nature of the data set, part of the data should never be shared,
while some of it could be accessed externally upon request for research
purposes.
</td> </tr>
<tr>
<td>
**Access Procedures**
</td> </tr>
<tr>
<td>
Access procedures TBD.
</td> </tr>
<tr>
<td>
**Embargo periods** (if any)
</td> </tr>
<tr>
<td>
N/A
</td> </tr>
<tr>
<td>
**Technical mechanisms for dissemination**
</td> </tr>
<tr>
<td>
N/A
</td> </tr>
<tr>
<td>
**Necessary S/W and other tools for enabling re-use**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Repository where data will be stored**
</td> </tr>
<tr>
<td>
Locally at CAPTAIN internal storage.
</td> </tr>
<tr>
<td>
**ARCHIVING AND PRESERVATION** (including storage and backup)
</td> </tr>
<tr>
<td>
**Data preservation period**
</td> </tr>
<tr>
<td>
N/A
</td> </tr>
<tr>
<td>
**Approximated end volume of data**
</td> </tr>
<tr>
<td>
The dataset volume is expected to be TBD.
</td> </tr>
<tr>
<td>
**Indicative associated costs for data archiving and preservation**
</td> </tr>
<tr>
<td>
TBD.
</td> </tr>
<tr>
<td>
**Indicative plan for covering the above costs**
</td> </tr>
<tr>
<td>
Small one-time costs covered by CAPTAIN.
</td> </tr>
<tr>
<td>
**PARTNERS ACTIVITIES AND RESPONSIBILITIES**
</td> </tr>
<tr>
<td>
**Partner Owner / Data Collector**
</td>
<td>
Data Subject/Nominated Proxy during CAPTAIN activation and everyday usage
</td> </tr>
<tr>
<td>
**Partner in charge of the data analysis**
</td>
<td>
</td> </tr>
<tr>
<td>
**Partner in charge of the data storage**
</td>
<td>
</td> </tr>
<tr>
<td>
**WPs and Tasks**
</td> </tr>
<tr>
<td>
</td> </tr> </table>
2. Projections Data
<table>
<tr>
<th>
**DATA SET REFERENCE NAME**
</th>
<th>
**Projections Data**
</th> </tr>
<tr>
<td>
**DATA SET DESCRIPTION**
</td> </tr>
<tr>
<td>
**Generic description**
</td> </tr>
<tr>
<td>
This dataset, in case of projection device fixed on the ceiling, is required
for the projection mapping pipeline, specifically the mesh of the Data
Subject’s environment as input.
</td> </tr>
<tr>
<td>
**Origin of data**
</td> </tr>
<tr>
<td>
2/3D Projection devices, fixed in key locations in the lab/home (for example,
the ceiling, wall).
</td> </tr>
<tr>
<td>
**Nature and scale of data**
</td> </tr>
<tr>
<td>
The raw data used to compute the mesh depend on the desired outcome:
* 2D Projections:
  * For dynamic objects (the participant's body or a moving object), the raw data are the live point clouds of the living room (depth map). This amount of data is huge: 640x480 16-bit depth images at 30 fps (a back-of-the-envelope volume check is given after this table).
  * For the static technique, the raw data are one 640x480 16-bit depth image or a sequence of images of the living room illuminated by structured lights. In the latter case, the images are captured once and for all (offline) and the amount of data is about 20 or 30 1080p or 720p images.
* 3D Projections:
  * Same as above, plus the user's head position.
_Data Format:_ TBD
</td> </tr>
<tr>
<td>
**To whom the dataset could be useful**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Related scientific publication(s)**
</td> </tr>
<tr>
<td>
N/A
</td> </tr>
<tr>
<td>
**Indicative existing similar data sets** (including possibilities for
integration and reuse)
</td> </tr>
<tr>
<td>
N/A
</td> </tr>
<tr>
<td>
**STANDARDS AND METADATA**
</td> </tr>
<tr>
<td>
The dataset will be accompanied with detailed documentation of its contents.
</td> </tr>
<tr>
<td>
**DATA SHARING***
</td> </tr>
<tr>
<td>
**Access type**
</td> </tr>
<tr>
<td>
The dataset could become publicly available after pseudonymisation. The
inclusion of a participant's data in the public part of this dataset will be
done on the basis of appropriate informed consent to data publication. Data
sets will be pseudonymised prior to access being granted.
</td> </tr>
<tr>
<td>
**Access Procedures**
</td> </tr>
<tr>
<td>
No access will be granted unless a specific request is made by partners
and/or research institutions.
</td> </tr>
<tr>
<td>
**Embargo periods** (if any)
</td> </tr>
<tr>
<td>
N/A
</td> </tr>
<tr>
<td>
**Technical mechanisms for dissemination**
</td> </tr>
<tr>
<td>
N/A
</td> </tr>
<tr>
<td>
**Necessary S/W and other tools for enabling re-use**
</td> </tr>
<tr>
<td>
N/A
</td> </tr>
<tr>
<td>
**Repository where data will be stored**
</td> </tr>
<tr>
<td>
Local as well as external repositories should be used.
</td> </tr>
<tr>
<td>
**ARCHIVING AND PRESERVATION** (including storage and backup)
</td> </tr>
<tr>
<td>
**Data preservation period**
</td> </tr>
<tr>
<td>
This is a temporal data set, so the corresponding lifespan rules should be applied.
</td> </tr>
<tr>
<td>
**Approximated end volume of data**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Indicative associated costs for data archiving and preservation**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Indicative plan for covering the above costs**
</td> </tr>
<tr>
<td>
N/A
</td> </tr>
<tr>
<td>
**PARTNERS ACTIVITIES AND RESPONSIBILITIES**
</td> </tr>
<tr>
<td>
**Partner Owner / Data Collector**
</td>
<td>
**HOL**
</td> </tr>
<tr>
<td>
**Partner in charge of the data analysis**
</td>
<td>
</td> </tr>
<tr>
<td>
**Partner in charge of the data storage**
</td>
<td>
</td> </tr>
<tr>
<td>
**WPs and Tasks**
</td> </tr>
<tr>
<td>
TBD
</td> </tr> </table>
3. Data Protection
Service Data Sets are intended to hold internal service data, not originally
aimed for exposure outside of CAPTAIN. Upon request, however, data sets could be
exposed externally, subject to clear consent provided by the Data Subjects.
Data sets should be stored in encrypted form; at the same time, specific
processes will be defined for participant data sets with regard to fetching,
identification of all data relating to a specific participant, and enabling the
exercise of Data Subject rights over all corresponding data sets.
3.3 PSEUDO-OPENED DATA SETS
1. BodyGestureGait
<table>
<tr>
<th>
**DATA SET REFERENCE NAME**
</th>
<th>
**BodyGestureGait**
</th> </tr>
<tr>
<td>
**DATA SET DESCRIPTION**
</td> </tr>
<tr>
<td>
**Generic description**
</td> </tr>
<tr>
<td>
Dataset for human identification and for posture, gesture, gait and balance
analysis and tracking experiments, especially during activities of daily living
(ADL), where MentorAge will be the main sensor.
</td> </tr>
<tr>
<td>
**Origin of data**
</td> </tr>
<tr>
<td>
The dataset will be collected using the MentorAge sensor during daily
activities
</td> </tr>
<tr>
<td>
**Nature and scale of data**
</td> </tr>
<tr>
<td>
It is expected that the datasets will be collected mainly during activities of
daily living in a real-life environment. The seniors will be asked to follow a
protocol containing daily activities, interaction with smart devices, virtual
tangible surfaces and the other CAPTAIN systems. Focus will be given to the
system under experimentation but always as part of a holistic daily living
scenario to ensure ecological validity. The session for each participant will
last approx. 60-90 minutes.
The dataset will contain the user’s silhouette as this is provided by the
MentorAge SDK (Skeleton with bones and joints) as well as continuous
collection of point clouds (3D depth data) of the space. It will be
investigated further whether the RGB and depth image will be collected.
_Data Format:_ PNG/JPG or mp4 for images (both RGB and depth), JSON for the
coordinates of the joints and bones, XML or TXT for annotations, data array
for depth information (2D matrices with depth information).
The dataset will be on the order of ~2-5 GB per recording hour.
</td> </tr>
<tr>
<td>
**To whom the dataset could be useful**
</td> </tr>
<tr>
<td>
The collected data will be used for the development and evaluation of the
human activity monitoring, the gait analysis and the location of the senior in
order to fulfil the “whenever and wherever is needed” requirement of the
CAPTAIN project. The different parts of the dataset could be useful in the
benchmarking of a series of human tracking methods, focusing either on human
identification, on posture and gesture analysis and tracking as well as in
detecting, if possible, symptoms and signs that are related to health status.
</td> </tr>
<tr>
<td>
**Related scientific publication(s)**
</td> </tr>
<tr>
<td>
The dataset will accompany our research results in the field of human activity
monitoring and gait analysis of people in real homes. Corresponding
publications are:
Konstantinidis, E. I., Billis, A. S., Dupre, R., Fernández Montenegro, J. M.,
Conti, G., Argyriou, V., & Bamidis, P. D. (2017). IoT of active and healthy
ageing: cases from indoor location analytics in the wild. Health and
Technology, 7(1), 41–49. https://doi.org/10.1007/s12553-016-0161-3
Konstantinidis, E. I., Antoniou, P. E., Bamparopoulos, G., & Bamidis, P. D.
(2014). A lightweight framework for transparent cross platform communication
of controller data in ambient assisted living environments. Information
Sciences, 300, 124–139. http://doi.org/10.1016/j.ins.2014.10.070
</td> </tr> </table>
<table>
<tr>
<th>
Konstantinidis, E. I., Billis, A. S., Bratsas, C., & Bamidis, P. D. (2016).
Active and Healthy Ageing Big Dataset streaming on demand. In Proceedings of
the 18th International Conference on HumanComputer Interaction. Toronto,
Canada.
</th> </tr>
<tr>
<td>
**Indicative existing similar data sets** (including possibilities for
integration and reuse)
</td> </tr>
<tr>
<td>
It should be noted that although several RGB-D datasets stemming from 3D
sensors and dealing with human activity analysis are publicly available (see
the datasets below), they must be further assessed regarding their
compatibility with the CAPTAIN project requirements (the sensor is expected to
be on the ceiling rather than in front of the television, as is usual in most
of the available datasets).
_G3D_ ( _http://dipersec.king.ac.uk/G3D/G3D.html_ ) : G3D dataset contains a
range of gaming actions captured with Microsoft Kinect. The Kinect enabled us
to record synchronised video, depth and skeleton data. The dataset contains 10
subjects performing 20 gaming actions: _punch right, punch left, kick right,
kick left, defend, golf swing, tennis swing forehand, tennis swing backhand,
tennis serve, throw bowling ball, aim and fire gun, walk, run, jump, climb,
crouch, steer a car, wave, flap and clap_ .
_MSRC-12 Kinect gesture dataset_ ( _http://research.microsoft.com/en-_
_us/um/cambridge/projects/msrc12/_ ) : The Microsoft Research Cambridge-12
Kinect gesture data set consists of sequences of human movements, represented
as body-part locations, and the associated gesture to be recognized by the
system.
_RGB-D Person Re-identification Dataset_ ( _http://old.iit.it/en/datasets-and-
code/datasets/rgbdid.html_ ) : A new dataset for person re-identification
using depth information. The main motivation is that the standard techniques
(such as _SDALF_ ) fail when the individuals change their clothing,
therefore they cannot be used for long-term video surveillance. Depth
information is the solution to deal with this problem because it stays
constant for a longer period of time.
_DGait Database_ ( _http://www.cvc.uab.es/DGaitDB/Summary.html_ ) : DGait is
a new gait database acquired with a depth camera. This database contains
videos from 53 subjects walking in different directions.
</td> </tr>
<tr>
<td>
**SAMPLE DATA**
</td> </tr>
<tr>
<td>
**Subjects to be enrolled**
</td> </tr>
<tr>
<td>
For the algorithm’s training data collection (experimental dataset),
participants from the pilot sites in NIVELY, AUTH and APSS will be recruited.
</td> </tr>
<tr>
<td>
**Technology for Data Collection**
</td> </tr>
<tr>
<td>
The body gesture and gait dataset will be collected through the MentorAge
sensor (NIVELY) or any other similar technology that will be described in the
D2.2 - First version of system specifications
</td> </tr>
<tr>
<td>
**Protocol for Data Collection**
</td> </tr>
<tr>
<td>
Although this has not been defined at the time of writing (it will become clear
after the user stories are presented to the participants), a protocol asking
the participants to perform movements similar to their activities of daily
living is expected (walking, sitting, washing dishes, etc.).
</td> </tr>
<tr>
<td>
**Potential Risk Associated with Data Collection**
</td> </tr>
<tr>
<td>
The collection of the experimental dataset will take place in a fully
controlled and safe environment in any of the partners' (NIVELY, AUTH, APSS)
living labs.
</td> </tr>
<tr>
<td>
**STANDARDS AND METADATA**
</td> </tr> </table>
<table>
<tr>
<th>
The dataset will be accompanied with detailed documentation of its contents.
Indicative metadata include: a) description of the experimental setup and
procedure that led to the generation of the dataset, b) type of exercise in
case the dataset produced during exergames, c) documentation of the variables
recorded in the dataset, and d) annotated posture, action and activity.
</th> </tr>
<tr>
<td>
**DATA SHARING**
</td> </tr>
<tr>
<td>
**Access type**
</td> </tr>
<tr>
<td>
Only the data captured from a subset of the participants (normal healthy
control subjects) during the initial phases could become publicly available,
while the rest will remain private to serve the CAPTAIN R&D objectives.
The inclusion of a (normal healthy control) subject's data in the public part
of this dataset will be done on the basis of appropriate informed consent to
data publication. It will be investigated further whether the silhouette
(coordinates of joints and bones) subset of the dataset could be made publicly
available.
</td> </tr>
<tr>
<td>
**Access Procedures**
</td> </tr>
<tr>
<td>
For the portions of the dataset that will be made publicly available, a
respective web page will be created on the data management portal (the use of
the CAC-playback manager 2 will be assessed) that will provide a description
of the dataset, links to a download section and a playback possibility in case
the playback manager approach is followed. The private part of this dataset
will be stored at a specifically designated private space of AUTH, NIVELY,
APSS or other CAPTAIN partner, in dedicated hard disk drives, on which only
members of the AUTH, NIVELY, APSS or other CAPTAIN partner research team whose
work directly relates to these data will have access. For further CAPTAIN
partners to obtain access to these data, they should submit a proper request
to the AUTH/NIVELY/APSS or other CAPTAIN partner primarily responsible,
including a justification of the need to have access to these data. If the
request is deemed justified, AUTH/NIVELY/APSS or the other CAPTAIN partner
will provide the respective data portions to the partner.
CAC-playback manager: Konstantinidis, E. I., Billis, A. S., Bratsas, C., &
Bamidis, P. D. (2016). Active and Healthy Ageing Big Dataset streaming on
demand. In Proceedings of the 18th International Conference on Human-Computer
Interaction. Toronto, Canada.
</td> </tr>
<tr>
<td>
**Embargo periods** (if any)
</td> </tr>
<tr>
<td>
The applicable datasets will be made publicly available two years after the end
of the project, to allow the consortium to prepare and submit the scientific
publications.
</td> </tr>
<tr>
<td>
**Technical mechanisms for dissemination**
</td> </tr>
<tr>
<td>
For the public part of the dataset, a link to this will be provided from the
Data management portal. The link will be provided in all relevant CAPTAIN
publications. A technical publication describing the dataset and acquisition
procedure will be published.
</td> </tr>
<tr>
<td>
**Necessary S/W and other tools for enabling re-use**
</td> </tr>
<tr>
<td>
The dataset will be designed to allow easy reuse with commonly available tools
and software libraries. In case of supporting the online playback of the
datasets, libraries for a variety of programming languages will be released
(e.g. _http://www.cac-framework.com/_ )
</td> </tr>
<tr>
<td>
**Repository where data will be stored**
</td> </tr>
<tr>
<td>
The public part of this dataset will be accommodated at the data management
portal of the project website.
</td> </tr>
<tr>
<td>
**ARCHIVING AND PRESERVATION** (including storage and backup)
</td> </tr>
<tr>
<td>
**Data preservation period**
</td> </tr>
<tr>
<td>
The public part of this dataset will be preserved online for as long as there
are regular downloads. After that it would be made accessible by request. The
private part of the dataset will be preserved by AUTH/NIVELY/APSS for the
period required by GDPR.
</td> </tr>
<tr>
<td>
**Approximated end volume of data**
</td> </tr>
<tr>
<td>
The dataset is expected to be several gigabytes, given that each recording
hour is expected to be ~2-5 GB.
</td> </tr>
<tr>
<td>
**Indicative associated costs for data archiving and preservation**
</td> </tr>
<tr>
<td>
Probably two dedicated hard disk drives will be allocated for the dataset: one
for the public part and one for the private part. In this case the costs
associated with its preservation will correspond to the hardware cost (hard
drives).
</td> </tr>
<tr>
<td>
**Indicative plan for covering the above costs**
</td> </tr>
<tr>
<td>
Small one-time costs covered by CAPTAIN.
</td> </tr>
<tr>
<td>
**PARTNERS ACTIVITIES AND RESPONSIBILITIES**
</td> </tr>
<tr>
<td>
**Partner Owner / Data Collector**
</td>
<td>
**NIVELY, AUTH, APSS**
</td> </tr>
<tr>
<td>
**Partner in charge of the data analysis**
</td>
<td>
**NIVELY, AUTH**
</td> </tr>
<tr>
<td>
**Partner in charge of the data storage**
</td>
<td>
**NIVELY, AUTH, APSS**
</td> </tr>
<tr>
<td>
**WPs and Tasks**
</td> </tr>
<tr>
<td>
The data are going to be collected within activities of WP3, WP4, WP6 and WP8,
to mainly serve the research efforts of T3.2, T4.1, T4.3, T4.4, T6.1 and T6.4.
</td> </tr> </table>
2. InGameMetrics
<table>
<tr>
<th>
**DATA SET REFERENCE NAME**
</th>
<th>
**in-GameMetrics**
</th> </tr>
<tr>
<td>
**DATA SET DESCRIPTION**
</td> </tr>
<tr>
<td>
**Generic description**
</td> </tr>
<tr>
<td>
This dataset will include in-game metrics and game performance of all the
sessions, so as to facilitate subsequent analysis with respect to the clinical
assessment tests, towards research on early signs of health deterioration and
physical and cognitive assessment (linear correlation of in-game metrics with
clinical assessment tests).
</td> </tr>
<tr>
<td>
**Origin of data**
</td> </tr>
<tr>
<td>
The dataset will be collected using the webFitForAll (wFFA) platform, which
accommodates different types of serious gaming interventions.
</td> </tr> </table>
<table>
<tr>
<th>
**Nature and scale of data**
</th> </tr>
<tr>
<td>
The datasets will be collected during the serious gaming interventions of
wFFA. The serious games will ask the user to perform specific exercises or
gaming tests, while mechanisms in the background track and collect the user's
performance. In the case of Exergames, the user will be asked to perform
specific exercises which will be captured by a 3D sensor.
The dataset will contain metrics like reaction time, player's path / optimum
path, goal time, movement range, balance, min and max angles of movements,
wrong choices, and further in-game metrics that will arise from the design
requirements of the serious games.
_Data Format:_ RDF triples or JSON
The dataset is expected to be composed of 180 records per serious gaming
session.
</td> </tr>
<tr>
<td>
**To whom the dataset could be useful**
</td> </tr>
<tr>
<td>
The collected data will be used for the development and evaluation of the
Personalized Game Suite in terms of usability, acceptance and user assessment.
The different parts of the (semantically annotated, where applicable) dataset
could be useful in the benchmarking of a series of serious games, focusing
either on detecting and assessing the user's physical and cognitive state or on
the effectiveness axis, the primary role of the interventions. The latter will
feed **T5.2**, which is intended to dynamically recommend game adaptations for
personalised and optimised use, as well as to keep users in the "flow zone"
(the feeling of complete and energised focus in an activity, with a high level
of enjoyment and fulfilment), towards increased adherence.
</td> </tr>
<tr>
<td>
**Related scientific publication(s)**
</td> </tr>
<tr>
<td>
The dataset will accompany our research results in the field of human activity
monitoring. AUTH team has an already existing dataset, including recordings
from elderly people going through Exergames, that will be used in analysis and
algorithms testing. Corresponding publications are:
Bamparopoulos, G., Konstantinidis, E., Bratsas, C., & Bamidis, P. D. (2016).
Towards exergaming commons: composing the exergame ontology for publishing
open game data. Journal of Biomedical Semantics, 7(1), Article nr 4.
http://doi.org/10.1186/s13326-016-0046-4
Konstantinidis, E., Bamparopoulos, G., & Bamidis, P. (2016). Moving Real
Exergaming Engines on the Web: The webFitForAll case study in an active and
healthy ageing living lab environment. IEEE
Journal of Biomedical and Health Informatics, 1–1.
http://doi.org/10.1109/JBHI.2016.2559787
</td> </tr>
<tr>
<td>
**Indicative existing similar data sets** (including possibilities for
integration and reuse)
</td> </tr>
<tr>
<td>
It should be noted that there are no serious games datasets available online.
To the best of our knowledge, the only open dataset regarding serious games,
and more specifically Exergames performed by older adults, is the one that
AUTH published a couple of years earlier, described in:
Bamparopoulos, G., Konstantinidis, E., Bratsas, C., & Bamidis, P. D. (2016).
Towards exergaming commons: composing the exergame ontology for publishing
open game data. Journal of Biomedical Semantics, 7(1), Article nr 4.
http://doi.org/10.1186/s13326-016-0046-4
</td> </tr>
<tr>
<td>
**SAMPLE DATA**
</td> </tr>
<tr>
<td>
**Subjects to be enrolled**
</td> </tr>
<tr>
<td>
**Although AUTH will use previously collected anonymized datasets for the
algorithms' training, if new datasets are needed,** participants from the pilot
sites in NIVELY, AUTH and APSS will be recruited.
</td> </tr> </table>
<table>
<tr>
<th>
**Technology for Data Collection**
</th> </tr>
<tr>
<td>
The data will be collected through the webFitForAll exergaming platform which
will be the physical and cognitive intervention component of the CAPTAIN
system.
</td> </tr>
<tr>
<td>
**Protocol for Data Collection**
</td> </tr>
<tr>
<td>
The users will be asked to follow a physical training protocol.
webFitForAll is part of the broader framework of research on "active ageing",
improving physical fitness for the elderly and vulnerable groups and, of
course, fighting dementia. It is based on modern Information and
Communication Technologies (ICT), providing physical exercise to maintain good
physical fitness through an innovative, low-cost technology platform that
combines exercise with games.
It includes protocols of exercises aimed at the elderly and vulnerable groups,
specially designed by scientists skilled in elderly care. These exercises,
tailored to the specificities of the users, enhance aerobic fitness,
flexibility, balance and strength:
* Aerobic: activates the body's circulatory and cardiovascular system and improves the performance of the heart both at rest and during exercise.
* Flexibility: necessary to avoid injuries; creates more flexibility and a greater range of motion, increases blood flow, wellness and inner peace, and improves posture.
* Balance: improves static and functional balance and mobility, and enhances the performance of functional tasks in everyday life. Physical training can improve balance and reduce the risk of falling.
* Strength: helps to prevent the loss of muscle mass that accompanies ageing (prevention of sarcopenia), which is a major problem in the elderly, and helps maintain bone mass. Improving the musculoskeletal system increases muscle mass, joint flexibility and bone strength.
The basic level of the protocol is set at an intensity of 50-60% of maximum
heart rate, and the third level reaches an intensity of about 70-85%. The
protocol can be adapted to the needs of the user.
</td> </tr>
<tr>
<td>
**Potential Risk Associated with Data Collection**
</td> </tr>
<tr>
<td>
The collection of the experimental dataset will take place in a fully
controlled and safe environment in any of the partners' (NIVELY, AUTH, APSS)
living labs.
</td> </tr>
<tr>
<td>
**STANDARDS AND METADATA**
</td> </tr>
<tr>
<td>
The dataset will be accompanied with detailed documentation of its contents.
Indicative metadata include: a) description of the experimental setup and
procedure that led to the generation of the dataset, b) type of exercise or
game, c) documentation of the variables recorded in the dataset, and d)
semantic annotation based on existing ontologies.
</td> </tr>
<tr>
<td>
**DATA SHARING***
</td> </tr>
<tr>
<td>
**Access type**
</td> </tr>
<tr>
<td>
The dataset could become publicly available. The inclusion of a subject’s data
in the public part of this dataset will be done on the basis of appropriate
informed consent to data publication.
</td> </tr>
<tr>
<td>
**Access Procedures**
</td> </tr>
<tr>
<td>
A proposed access procedure is to automatically convert the acquired game
results to RDF triples and publish them on the web as open data, accessible
through a SPARQL endpoint. In order to facilitate access, links to a download
section where the datasets can be downloaded as JSON files will be provided if
</td> </tr> </table>
<table>
<tr>
<th>
required. The private part of this dataset will be stored in a specifically
designated private space of AUTH, on dedicated hard disk drives, to which only
members of the AUTH research team whose work directly relates to these data
will have access. For further partners to obtain access to these data, they
should submit a proper request to the AUTH team primarily responsible,
including a justification of the need to have access to these data. If the
request is deemed justified, AUTH will provide the respective data portions to
the partner.
</th> </tr>
<tr>
<td>
**Embargo periods** (if any)
</td> </tr>
<tr>
<td>
The applicable datasets will be made publicly available two years after the end
of the project, to allow the consortium to prepare and submit the scientific
publications.
</td> </tr>
<tr>
<td>
**Technical mechanisms for dissemination**
</td> </tr>
<tr>
<td>
For the public part of the dataset, a link to this, as well as to the SPARQL
endpoint will be provided from the Data management portal. The link and the
SPARQL endpoint will be provided in all relevant publications. A technical
publication describing the dataset and acquisition procedure will be
published.
</td> </tr>
<tr>
<td>
**Necessary S/W and other tools for enabling re-use**
</td> </tr>
<tr>
<td>
The dataset will be designed to allow easy reuse with commonly available tools
and software libraries.
</td> </tr>
<tr>
<td>
**Repository where data will be stored**
</td> </tr>
<tr>
<td>
The public part of this dataset will be accommodated at AUTH servers,
accessible by the SPARQL endpoint.
</td> </tr>
<tr>
<td>
**ARCHIVING AND PRESERVATION** (including storage and backup)
</td> </tr>
<tr>
<td>
**Data preservation period**
</td> </tr>
<tr>
<td>
The public part of this dataset will be preserved online for as long as there
are regular downloads. After that it would be made accessible by request.
</td> </tr>
<tr>
<td>
**Approximated end volume of data**
</td> </tr>
<tr>
<td>
The dataset is expected to be several hundred MB, given that each session is
expected to produce a volume of ~80 KB of data.
</td> </tr>
<tr>
<td>
**Indicative associated costs for data archiving and preservation**
</td> </tr>
<tr>
<td>
The dataset will be stored in the serious gaming server hosted by AUTH. There
are no costs associated with its preservation. In case Microsoft Azure
accommodates the serious gaming server, there will be an additional cost of
approximately 20 Euros (€) per month for one year.
</td> </tr>
<tr>
<td>
**Indicative plan for covering the above costs**
</td> </tr>
<tr>
<td>
The costs associated with the data archiving and preservation will be covered
by the CAPTAIN project budget.
</td> </tr>
<tr>
<td>
**PARTNERS ACTIVITIES AND RESPONSIBILITIES**
</td> </tr>
<tr>
<td>
**Partner Owner / Data Collector**
</td>
<td>
**AUTH**
</td> </tr>
<tr>
<td>
**Partner in charge of the data analysis**
</td>
<td>
</td> </tr>
<tr>
<td>
**Partner in charge of the data storage**
</td>
<td>
</td> </tr>
<tr>
<td>
**WPs and Tasks**
</td> </tr>
<tr>
<td>
The data are going to be collected within activities of WP7, T4.4 and used in
T5.2 and WP8.
</td> </tr> </table>
3. SensorsData
<table>
<tr>
<th>
**DATA SET REFERENCE NAME**
</th>
<th>
**SensorsData**
</th> </tr>
<tr>
<td>
**DATA SET DESCRIPTION**
</td> </tr>
<tr>
<td>
**Generic description**
</td> </tr>
<tr>
<td>
For T4.1 and T 4.2 own data will be used for training. For research and WP5
purposes still will be useful to collect validation data; the amount of
validation data could be decided based on data sources and the legal issues
associated with it.
</td> </tr>
<tr>
<td>
**Origin of data**
</td> </tr>
<tr>
<td>
Data will be created based on user presence detection, user pose, user
identification as well as face emotion recognition, speech emotion
recognition, fuse speech and face emotion.
</td> </tr>
<tr>
<td>
**Nature and scale of data**
</td> </tr>
<tr>
<td>
Compressed video (such as H.264) at the resolution of the sensors. The bitrate
should be sufficient so that no artefacts are present in highly dynamic
situations (the effective bitrate depends on the camera that will be
integrated).
Validation data does not need to be large: a few minutes of footage for each
scene that should be supported is enough.
_Data Format:_
H.264 video and text-based annotations.
</td> </tr>
<tr>
<td>
**To whom the dataset could be useful**
</td> </tr>
<tr>
<td>
The dataset can be used to debug the emotion and pose detection algorithms in
real-life situations of the project and to fine tune pre-processing
parameters. Moreover, it can be used by the system integration team to
validate the integration.
</td> </tr>
<tr>
<td>
**Related scientific publication(s)**
</td> </tr>
<tr>
<td>
N/A
</td> </tr>
<tr>
<td>
**Indicative existing similar data sets** (including possibilities for
integration and reuse)
</td> </tr>
<tr>
<td>
nViso proprietary datasets (cannot be integrated nor reused)
</td> </tr>
<tr>
<td>
**SAMPLE DATA**
</td> </tr>
<tr>
<td>
**Subjects to be enrolled**
</td> </tr>
<tr>
<td>
The subjects depend on the problems that need to be solved.
</td> </tr>
<tr>
<td>
**Technology for Data Collection**
</td> </tr>
<tr>
<td>
Camera used in the device.
</td> </tr>
<tr>
<td>
**Protocol for Data Collection**
</td> </tr>
<tr>
<td>
The collection protocol is defined case by case, depending on the issues that
need to be solved (for instance, if a problem in low-light conditions is
detected during the tests, then video should be captured in those conditions).
</td> </tr>
<tr>
<td>
**Potential Risk Associated with Data Collection**
</td> </tr>
<tr>
<td>
For this data, the collection protocol can be adapted so as to minimize the
risks.
</td> </tr>
<tr>
<td>
**STANDARDS AND METADATA**
</td> </tr>
<tr>
<td>
The dataset will be accompanied with detailed documentation of its contents.
</td> </tr>
<tr>
<td>
**DATA SHARING***
</td> </tr>
<tr>
<td>
**Access type**
</td> </tr>
<tr>
<td>
Since a very small amount of data is required for WP4 purposes, it should be
possible to collect it in a way that does not pose major ethical/privacy
problems. Since the volume is so small, the video to be shared could even be
reviewed and explicit consent to share it given by the participants.
</td> </tr>
<tr>
<td>
**Access Procedures**
</td> </tr>
<tr>
<td>
No continuous access of the data is necessary. Data can be transferred to the
relevant party when it is required.
</td> </tr>
<tr>
<td>
**Embargo periods** (if any)
</td> </tr>
<tr>
<td>
This data is for debugging purposes and should not be shared outside the scope
of debugging.
</td> </tr>
<tr>
<td>
**Technical mechanisms for dissemination**
</td> </tr>
<tr>
<td>
This data is not intended for dissemination.
</td> </tr>
<tr>
<td>
**Necessary S/W and other tools for enabling re-use**
</td> </tr>
<tr>
<td>
This data is used only for debugging and no reuse is expected.
</td> </tr>
<tr>
<td>
**Repository where data will be stored**
</td> </tr>
<tr>
<td>
There is no need for long-term storage.
</td> </tr>
<tr>
<td>
**ARCHIVING AND PRESERVATION** (including storage and backup)
</td> </tr>
<tr>
<td>
**Data preservation period**
</td> </tr>
<tr>
<td>
The data can be destroyed as soon as the debugging is completed.
</td> </tr>
<tr>
<td>
**Approximated end volume of data**
</td> </tr>
<tr>
<td>
The data collected are used to validate the performance and possibly tune some
pre-processing parameters. A few hours of total footage covering different
lighting conditions, environments, device setups and people should be more
than enough.
</td> </tr>
<tr>
<td>
**Indicative associated costs for data archiving and preservation**
</td> </tr>
<tr>
<td>
No preservation is expected.
</td> </tr>
<tr>
<td>
**Indicative plan for covering the above costs**
</td> </tr>
<tr>
<td>
The costs associated with the data archiving and preservation will be covered
by the CAPTAIN project budget.
</td> </tr>
<tr>
<td>
**PARTNERS ACTIVITIES AND RESPONSIBILITIES**
</td> </tr>
<tr>
<td>
**Partner Owner / Data Collector**
</td>
<td>
**NVISO, APSS**
</td> </tr>
<tr>
<td>
**Partner in charge of the data analysis**
</td>
<td>
</td> </tr>
<tr>
<td>
**Partner in charge of the data storage**
</td>
<td>
</td> </tr>
<tr>
<td>
**WPs and Tasks**
</td> </tr>
<tr>
<td>
WP4 and WP5
</td> </tr> </table>
4. DailyDiaryNutrition
<table>
<tr>
<th>
**DATA SET REFERENCE NAME**
</th>
<th>
**DailyDiaryNutrition**
</th> </tr>
<tr>
<td>
**DATA SET DESCRIPTION**
</td> </tr>
<tr>
<td>
**Generic description**
</td> </tr>
<tr>
<td>
This dataset will include the records created by the users themselves,
describing their daily activities in nutrition (diet, food consumed, time,
duration, etc.) in plain text or in the form of a questionnaire.
</td> </tr>
<tr>
<td>
**Origin of data**
</td> </tr>
<tr>
<td>
The dataset will be collected using a diary-style application available via the
appliance User Interface.
</td> </tr>
<tr>
<td>
**Nature and scale of data**
</td> </tr>
<tr>
<td>
The datasets will be collected regularly (assuming daily), relying on the
users' goodwill (possibly several times a day) or on predefined reminders. A
form will be presented allowing users to write down their daily activities
related to nutrition.
The dataset will contain text and/or checkboxes with associated data.
_Data Format:_ TBD
</td> </tr>
<tr>
<td>
**To whom the dataset could be useful**
</td> </tr>
<tr>
<td>
The collected data will be used for the development, evaluation and correction
of the activities related services provided by the of the appliance. The
different parts of the dataset could be useful in the benchmarking of a
already provided services, focusing on efficiency of them.
</td> </tr>
<tr>
<td>
**Related scientific publication(s)**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Indicative existing similar data sets** (including possibilities for
integration and reuse)
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**SAMPLE DATA**
</td> </tr>
<tr>
<td>
**Subjects to be enrolled**
</td> </tr>
<tr>
<td>
TBD
</td> </tr> </table>
<table>
<tr>
<td>
**Technology for Data Collection**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Protocol for Data Collection**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Potential Risk Associated with Data Collection**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**STANDARDS AND METADATA**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**DATA SHARING***
</td> </tr>
<tr>
<td>
**Access type**
</td> </tr>
<tr>
<td>
The dataset could become publicly available after pseudonymisation. The
inclusion of a subject’s data in the public part of this dataset will be done
on the basis of appropriate informed consent to data publication.
</td> </tr>
<tr>
<td>
**Access Procedures**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Embargo periods** (if any)
</td> </tr>
<tr>
<td>
The applicable datasets will be made publicly available two years after the end
of the project, to allow the consortium to prepare and submit the scientific
publications.
</td> </tr>
<tr>
<td>
**Technical mechanisms for dissemination**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Necessary S/W and other tools for enabling re-use**
</td> </tr>
<tr>
<td>
The dataset will be designed to allow easy reuse with commonly available tools
and software libraries.
</td> </tr>
<tr>
<td>
**Repository where data will be stored**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**ARCHIVING AND PRESERVATION** (including storage and backup)
</td> </tr>
<tr>
<td>
**Data preservation period**
</td> </tr>
<tr>
<td>
The public part of this dataset will be preserved online for as long as there
are regular downloads. After that it would be made accessible by request.
</td> </tr>
<tr>
<td>
**Approximated end volume of data**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Indicative associated costs for data archiving and preservation**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Indicative plan for covering the above costs**
</td> </tr>
<tr>
<td>
The costs associated with the data archiving and preservation will be covered
by the CAPTAIN project budget.
</td> </tr>
<tr>
<td>
**PARTNERS ACTIVITIES AND RESPONSIBILITIES**
</td> </tr>
<tr>
<td>
**Partner Owner / Data Collector**
</td>
<td>
</td> </tr>
<tr>
<td>
**Partner in charge of the data analysis**
</td>
<td>
</td> </tr>
<tr>
<td>
**Partner in charge of the data storage**
</td>
<td>
</td> </tr>
<tr>
<td>
**WPs and Tasks**
</td> </tr>
<tr>
<td>
</td> </tr> </table>
5. DailyDiarySocial
<table>
<tr>
<th>
**DATA SET REFERENCE NAME**
</th>
<th>
**DailyDiarySocial**
</th> </tr>
<tr>
<td>
**DATA SET DESCRIPTION**
</td> </tr>
<tr>
<td>
**Generic description**
</td> </tr>
<tr>
<td>
This dataset will include the records created by the users themselves,
describing their daily activities (Social Interactions) in plain text or in
the form of a questionnaire.
</td> </tr>
<tr>
<td>
**Origin of data**
</td> </tr>
<tr>
<td>
The dataset will be collected using a diary-style application available via the
appliance User Interface.
</td> </tr>
<tr>
<td>
**Nature and scale of data**
</td> </tr>
<tr>
<td>
The datasets will be collected regularly (assuming daily), relying on the
users' goodwill (possibly several times a day) or on predefined reminders. A
form will be presented allowing users to write down their daily social
activities: phone calls, visits, Skype (or similar) chats, social networking,
etc. The dataset will contain text and/or checkboxes with associated data.
_Data Format:_ TBD
</td> </tr>
<tr>
<td>
**To whom the dataset could be useful**
</td> </tr>
<tr>
<td>
The collected data will be used for the development, evaluation and correction
of the activity-related services provided by the appliance. The different
parts of the dataset could be useful in the benchmarking of already provided
services, focusing on their efficiency.
</td> </tr>
<tr>
<td>
**Related scientific publication(s)**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Indicative existing similar data sets** (including possibilities for
integration and reuse)
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**SAMPLE DATA**
</td> </tr>
<tr>
<td>
**Subjects to be enrolled**
</td> </tr>
<tr>
<td>
TBD
</td> </tr> </table>
<table>
<tr>
<td>
**Technology for Data Collection**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Protocol for Data Collection**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Potential Risk Associated with Data Collection**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**STANDARDS AND METADATA**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**DATA SHARING***
</td> </tr>
<tr>
<td>
**Access type**
</td> </tr>
<tr>
<td>
The dataset could become publicly available after pseudonymisation. The
inclusion of a subject’s data in the public part of this dataset will be done
on the basis of appropriate informed consent to data publication.
</td> </tr>
<tr>
<td>
**Access Procedures**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Embargo periods** (if any)
</td> </tr>
<tr>
<td>
The applicable datasets will be made publicly available two years after the end
of the project, to allow the consortium to prepare and submit the scientific
publications.
</td> </tr>
<tr>
<td>
**Technical mechanisms for dissemination**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Necessary S/W and other tools for enabling re-use**
</td> </tr>
<tr>
<td>
The dataset will be designed to allow easy reuse with commonly available tools
and software libraries.
</td> </tr>
<tr>
<td>
**Repository where data will be stored**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**ARCHIVING AND PRESERVATION** (including storage and backup)
</td> </tr>
<tr>
<td>
**Data preservation period**
</td> </tr>
<tr>
<td>
The public part of this dataset will be preserved online for as long as there
are regular downloads. After that it would be made accessible by request.
</td> </tr>
<tr>
<td>
**Approximated end volume of data**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Indicative associated costs for data archiving and preservation**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Indicative plan for covering the above costs**
</td> </tr>
<tr>
<td>
The costs associated with the data archiving and preservation will be covered
by the CAPTAIN project budget.
</td> </tr>
<tr>
<td>
**PARTNERS ACTIVITIES AND RESPONSIBILITIES**
</td> </tr>
<tr>
<td>
**Partner Owner / Data Collector**
</td>
<td>
</td> </tr>
<tr>
<td>
**Partner in charge of the data analysis**
</td>
<td>
</td> </tr>
<tr>
<td>
**Partner in charge of the data storage**
</td>
<td>
</td> </tr>
<tr>
<td>
**WPs and Tasks**
</td> </tr>
<tr>
<td>
</td> </tr> </table>
6. DailyDiaryPhysical
<table>
<tr>
<th>
**DATA SET REFERENCE NAME**
</th>
<th>
**DailyDiaryPhysical**
</th> </tr>
<tr>
<td>
**DATA SET DESCRIPTION**
</td> </tr>
<tr>
<td>
**Generic description**
</td> </tr>
<tr>
<td>
This dataset will include the records created by the users themselves,
describing their daily activities (Physical Activities) in plain text or in
the form of a questionnaire.
</td> </tr>
<tr>
<td>
**Origin of data**
</td> </tr>
<tr>
<td>
The dataset will be collected using a diary-style application available via the
appliance User Interface.
</td> </tr>
<tr>
<td>
**Nature and scale of data**
</td> </tr>
<tr>
<td>
The datasets will be collected regularly (assuming daily), relying on the
users' goodwill (possibly several times a day) or on predefined reminders. A
form will be presented allowing users to write down their daily physical
activities: physical exercises, walking, gym, etc.
The dataset will contain text and/or checkboxes with associated data.
_Data Format:_ TBD
</td> </tr>
<tr>
<td>
**To whom the dataset could be useful**
</td> </tr>
<tr>
<td>
The collected data will be used for the development, evaluation and correction
of the activity-related services provided by the appliance. The different
parts of the dataset could be useful in the benchmarking of already provided
services, focusing on their efficiency.
</td> </tr>
<tr>
<td>
**Related scientific publication(s)**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Indicative existing similar data sets** (including possibilities for
integration and reuse)
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**SAMPLE DATA**
</td> </tr>
<tr>
<td>
**Subjects to be enrolled**
</td> </tr>
<tr>
<td>
TBD
</td> </tr> </table>
<table>
<tr>
<td>
**Technology for Data Collection**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Protocol for Data Collection**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Potential Risk Associated with Data Collection**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**STANDARDS AND METADATA**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**DATA SHARING***
</td> </tr>
<tr>
<td>
**Access type**
</td> </tr>
<tr>
<td>
The dataset could become publicly available after pseudonymisation. The
inclusion of a subject’s data in the public part of this dataset will be done
on the basis of appropriate informed consent to data publication.
</td> </tr>
<tr>
<td>
**Access Procedures**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Embargo periods** (if any)
</td> </tr>
<tr>
<td>
The applicable datasets will be made publicly available two years after the end
of the project, to allow the consortium to prepare and submit the scientific
publications.
</td> </tr>
<tr>
<td>
**Technical mechanisms for dissemination**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Necessary S/W and other tools for enabling re-use**
</td> </tr>
<tr>
<td>
The dataset will be designed to allow easy reuse with commonly available tools
and software libraries.
</td> </tr>
<tr>
<td>
**Repository where data will be stored**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**ARCHIVING AND PRESERVATION** (including storage and backup)
</td> </tr>
<tr>
<td>
**Data preservation period**
</td> </tr>
<tr>
<td>
The public part of this dataset will be preserved online for as long as there
are regular downloads. After that it would be made accessible by request.
</td> </tr>
<tr>
<td>
**Approximated end volume of data**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Indicative associated costs for data archiving and preservation**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Indicative plan for covering the above costs**
</td> </tr>
<tr>
<td>
The costs associated with the data archiving and preservation will be covered
by the CAPTAIN project budget.
</td> </tr>
<tr>
<td>
**PARTNERS ACTIVITIES AND RESPONSIBILITIES**
</td> </tr>
<tr>
<td>
**Partner Owner / Data Collector**
</td>
<td>
</td> </tr>
<tr>
<td>
**Partner in charge of the data analysis**
</td>
<td>
</td> </tr>
<tr>
<td>
**Partner in charge of the data storage**
</td>
<td>
</td> </tr>
<tr>
<td>
**WPs and Tasks**
</td> </tr>
<tr>
<td>
</td> </tr> </table>
7. DailyDiaryADL
<table>
<tr>
<th>
**DATA SET REFERENCE NAME**
</th>
<th>
**DailyDiaryADL**
</th> </tr>
<tr>
<td>
**DATA SET DESCRIPTION**
</td> </tr>
<tr>
<td>
**Generic description**
</td> </tr>
<tr>
<td>
This dataset will include the records created by the users themselves,
describing their daily activities related to difficulty with Activities of
Daily Living (ADL) and the associated potential risk of falls, in plain text
or in the form of a questionnaire.
</td> </tr>
<tr>
<td>
**Origin of data**
</td> </tr>
<tr>
<td>
The dataset will be collected using a diary-style application available via the
appliance User Interface.
</td> </tr>
<tr>
<td>
**Nature and scale of data**
</td> </tr>
<tr>
<td>
The datasets will be collected regularly (assuming daily), relying on the
users' goodwill (possibly several times a day) or on predefined reminders. A
form will be presented allowing users to write down notes about their
condition related to ADL and the (potential) risk of falls. The dataset will
contain text and/or checkboxes with associated data.
_Data Format:_ TBD
</td> </tr>
<tr>
<td>
**To whom the dataset could be useful**
</td> </tr>
<tr>
<td>
The collected data will be used for the development, evaluation and correction
of the activity-related services provided by the appliance. The different
parts of the dataset could be useful in the benchmarking of already provided
services, focusing on their efficiency.
</td> </tr>
<tr>
<td>
**Related scientific publication(s)**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Indicative existing similar data sets** (including possibilities for
integration and reuse)
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**SAMPLE DATA**
</td> </tr>
<tr>
<td>
**Subjects to be enrolled**
</td> </tr>
<tr>
<td>
TBD
</td> </tr> </table>
<table>
<tr>
<td>
**Technology for Data Collection**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Protocol for Data Collection**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Potential Risk Associated with Data Collection**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**STANDARDS AND METADATA**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**DATA SHARING***
</td> </tr>
<tr>
<td>
**Access type**
</td> </tr>
<tr>
<td>
The dataset could become publicly available after pseudonymisation. The
inclusion of a subject’s data in the public part of this dataset will be done
on the basis of appropriate informed consent to data publication.
</td> </tr>
<tr>
<td>
**Access Procedures**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Embargo periods** (if any)
</td> </tr>
<tr>
<td>
The applicable datasets will be made publicly available two years after the end
of the project, to allow the consortium to prepare and submit the scientific
publications.
</td> </tr>
<tr>
<td>
**Technical mechanisms for dissemination**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Necessary S/W and other tools for enabling re-use**
</td> </tr>
<tr>
<td>
The dataset will be designed to allow easy reuse with commonly available tools
and software libraries.
</td> </tr>
<tr>
<td>
**Repository where data will be stored**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**ARCHIVING AND PRESERVATION** (including storage and backup)
</td> </tr>
<tr>
<td>
**Data preservation period**
</td> </tr>
<tr>
<td>
The public part of this dataset will be preserved online for as long as there
are regular downloads. After that, it will be made accessible upon request.
</td> </tr>
<tr>
<td>
**Approximated end volume of data**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Indicative associated costs for data archiving and preservation**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Indicative plan for covering the above costs**
</td> </tr>
<tr>
<td>
The costs associated with the data archiving and preservation will be covered
by the CAPTAIN project budget.
</td> </tr>
<tr>
<td>
**PARTNERS ACTIVITIES AND RESPONSIBILITIES**
</td> </tr>
<tr>
<td>
**Partner Owner / Data Collector**
</td>
<td>
</td> </tr>
<tr>
<td>
**Partner in charge of the data analysis**
</td>
<td>
</td> </tr>
<tr>
<td>
**Partner in charge of the data storage**
</td>
<td>
</td> </tr>
<tr>
<td>
**WPs and Tasks**
</td> </tr>
<tr>
<td>
</td> </tr> </table>
8. TheCAPTAINCoach
<table>
<tr>
<th>
**DATA SET REFERENCE NAME**
</th>
<th>
**TheCAPTAINCoach**
</th> </tr>
<tr>
<td>
**DATA SET DESCRIPTION**
</td> </tr>
<tr>
<td>
**Generic description**
</td> </tr>
<tr>
<td>
Internal data generated by CAPTAIN sensors during appliance everyday
functionality will be cleaned, pre-processed, anonymized (or at least
pseudonymized), summarized and then delivered to the cloud platform, where
this dataset will reside.
The resulting data set will contain two kinds of information:
● Longitudinal data from users covering high level features such as emotional
states, activities of daily living, travel and gait patterns, as well as
cognitive/physical training results (T4.5) ● Personalized guides and
recommendations (T5.1-T5.3)
</td> </tr>
<tr>
<td>
**Origin of data**
</td> </tr>
<tr>
<td>
The input data will be the distilled combination of the outputs from **WP4
through T4.5** and **T5.1-T5.3**.
</td> </tr>
<tr>
<td>
**Nature and scale of data**
</td> </tr>
<tr>
<td>
All output data from systems in **WP4** (sensing) through T4.5:
* User presence information
* Emotional, behavioural and contextual activity information (sensitive data)
* Indoor location and gait analysis
* Physical and cognitive training progress monitoring (sensitive data)
Coaching information:
* Guides and recommendations for interventions
* User model instances (coming from **T5.1 & T5.3**) for motivational engine & HCI.
_Data Format:_
The expected volume may require Big Data management tools; the data will be
mainly numeric and textual. To structure the data, the JSON file format is
foreseen, unless a simpler tabular representation such as a CSV file is enough.
Each data point must include the user ID and the timestamp (see the sketch
after this table).
</td> </tr> </table>
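As a minimal illustration of such a data point, assuming the JSON option and
hypothetical field names (the final schema is not yet defined), a record could
be serialised as follows:

```python
import json
from datetime import datetime, timezone

# Hypothetical data point for the coach dataset; the field names are
# illustrative only, since the schema is still to be defined. Only the
# mandatory user ID and timestamp follow the text above.
data_point = {
    "user_id": "pseudo-0042",                        # pseudonymous user ID
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "feature": "gait_speed",                         # high-level feature name
    "value": 0.9,                                    # numeric payload
    "linked_event": None,                            # optional linkage to another event
}
print(json.dumps(data_point))
```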
<table>
<tr>
<th>
If linkage among events is required, this will also be stored. Data will
reflect a longitudinal dimension across a broad spectrum of elderly people
with different characteristics, backgrounds and interests.
This data should be served by the CAPTAIN cloud system through a clean REST
API, including authentication methods.
A supporting backup could be sent digitally to the researcher profiles as a SQL
database export. In this case, the expected size could be in the range of MB
to several GB per user.
The annotations should include ground truth (validated by experts) about the
evolution of each person, as well as the category to which each subject
belongs (each time a person moves from one category to another also needs to
be annotated).
</th> </tr>
<tr>
<td>
**To whom the dataset could be useful**
</td> </tr>
<tr>
<td>
To those interested in developing and evaluating virtual coaching algorithms.
In particular, the anonymized part of the collected data could be useful for
researchers, while the pseudonymized part of the dataset will be useful for
Vicomtech in order to implement the CAPTAIN coach. Additional CAPTAIN services
may be created based on the anonymized part of this dataset.
</td> </tr>
<tr>
<td>
**Related scientific publication(s)**
</td> </tr>
<tr>
<td>
TBD
</td> </tr>
<tr>
<td>
**Indicative existing similar data sets** (including possibilities for
integration and reuse)
</td> </tr>
<tr>
<td>
N/A
</td> </tr>
<tr>
<td>
**SAMPLE DATA**
</td> </tr>
<tr>
<td>
**Subjects to be enrolled**
</td> </tr>
<tr>
<td>
A broad spectrum of subjects (age, gender, health profile, etc.) is required:
* People with major cognitive/physical impairment should be excluded.
* 50 could be a reasonable number of subjects.
* Enrolment must cover recording for more than 6 months per subject.
</td> </tr>
<tr>
<td>
</td> </tr>
<tr>
<td>
**Technology for Data Collection**
</td> </tr>
<tr>
<td>
Data will be collected in two parallel ways: a) using a lifelogging smartphone
app containing a set of questionnaires, which will be filled in every day by
either the data subjects or their nominated proxy, and b) by using the modules
developed in WP4 as well as the implementations of tasks 5.1 to 5.3. Since the
WP4 modules and the T5.1-T5.3 implementations are developed within the
project, they will not be available for the first pilots in which data will be
collected; in those cases, only lifelogging-based surveys will be collected
and available.
</td> </tr>
<tr>
<td>
**Protocol for Data Collection**
</td> </tr>
<tr>
<td>
A combination of data self-reporting (lifelogging), questionnaires and
structured interviews is foreseen:
* If possible, at least for the pilots in which the outputs from WP4 and tasks T5.1-T5.3 should be available, data captured from those outputs should be included in this dataset
* The Task 4.5 leader, DIGIFLAK, should provide this dataset
* Collected data should cover a period of at least 6 months
</td> </tr> </table>
<table>
<tr>
<th>
* Depending on the final data reporting process, it could take more than 2 hours/day
* Seasonal analysis may be of interest
The protocol will be as follows:
1. For each pilot, it will first be decided whether the WP4 and/or T5.1-T5.3 modules are available for capturing this kind of data. If so, they will be installed and enabled to capture data
2. The lifelogging smartphone app will be made available to data subjects, including an explanation to them (and to the nominated proxy, if one exists)
3. Data subjects will perform the activities as described in the pilot
4. Data subjects (or nominated proxies) will carry out the lifelogging process
5. Data will be uploaded to the CAPTAIN cloud system and made available during the pilot, with a delay of at most 1 week. During this process, data will only be pseudonymized and only available to CAPTAIN members
6. After finishing the pilot (except for data subjects who may be enrolled for future pilots), data will be anonymized by following this procedure (see the sketch after this table):
1. Create a new person ID for each original person ID, which must not be directly obtainable from the original
2. Replace all references to the original IDs by the new ones, obtaining a new dataset D'
3. Delete any table with the correspondence between the original and new IDs
4. Send D' to WP5 partners
</th> </tr>
<tr>
<td>
**Potential Risk Associated with Data Collection**
</td> </tr>
<tr>
<td>
**Risk**
</td>
<td>
**Level**
</td>
<td>
**Probability**
</td> </tr>
<tr>
<td>
Safety in performing activity
</td>
<td>
High
</td>
<td>
Low
</td> </tr>
<tr>
<td>
Using technologies
</td>
<td>
High
</td>
<td>
Medium
</td> </tr>
<tr>
<td>
Psychological implications
</td>
<td>
Medium
</td>
<td>
Low
</td> </tr>
<tr>
<td>
**Access type**
</td> </tr>
<tr>
<td>
For the pseudonymized part of the dataset, user- and password-based access
control will be implemented, as well as a registry of accesses to the data.
Access will be restricted to CAPTAIN project members, and specifically to WP5
and Task 4.5 participants.
For the anonymized part of the dataset, access will also be controlled by
password, but may be granted upon request. Access will also be registered.
In both cases, read-only access will be provided.
</td> </tr>
<tr>
<td>
**Access Procedures**
</td> </tr>
<tr>
<td>
This dataset will be accessed through a REST API provided by the CAPTAIN cloud
system. Additionally, SQL backups are foreseen.
</td> </tr>
<tr>
<td>
**Embargo periods** (if any)
</td> </tr>
<tr>
<td>
The applicable datasets will be publicly available two years after the end of
the project to allow the consortium prepare and submit the scientific
publications.
</td> </tr>
<tr>
<td>
**Technical mechanisms for dissemination**
</td> </tr>
<tr>
<td>
For the public part of the dataset, a link to this will be provided from the
Data management portal. The link will be provided in all relevant CAPTAIN
publications. A technical publication describing the dataset and acquisition
procedure will be published.
</td> </tr>
<tr>
<td>
**Necessary S/W and other tools for enabling re-use**
</td> </tr>
<tr>
<td>
The dataset will be designed to allow easy reuse with commonly available tools
and software libraries.
</td> </tr>
<tr>
<td>
**Repository where data will be stored**
</td> </tr>
<tr>
<td>
This dataset will be accommodated at the data management portal of the project
website.
</td> </tr>
<tr>
<td>
**ARCHIVING AND PRESERVATION** (including storage and backup)
</td> </tr>
<tr>
<td>
**Data preservation period**
</td> </tr>
<tr>
<td>
This dataset will be preserved online for as long as there are regular
downloads. After that, it will be made accessible upon request. The private
part of the dataset will be preserved by DIGIFLAK for the period required by
the GDPR.
</td> </tr>
<tr>
<td>
**Approximated end volume of data**
</td> </tr>
<tr>
<td>
~GBs
</td> </tr>
<tr>
<td>
**Indicative associated costs for data archiving and preservation**
</td> </tr>
<tr>
<td>
Two dedicated hard disk drives will probably be allocated for the dataset: one
for the public part and one for the private part. In this case, the costs
associated with its preservation will correspond to the hardware cost (hard
drives).
</td> </tr>
<tr>
<td>
**Indicative plan for covering the above costs**
</td> </tr>
<tr>
<td>
The costs associated with the data archiving and preservation will be covered
by the CAPTAIN project budget.
</td> </tr>
<tr>
<td>
**PARTNERS ACTIVITIES AND RESPONSIBILITIES**
</td> </tr>
<tr>
<td>
**Partner Owner / Data Collector**
</td>
<td>
**DIGIFLAK**
</td> </tr>
<tr>
<td>
**Partner in charge of the data analysis**
</td>
<td>
**VICOMTECH**
</td> </tr>
<tr>
<td>
**Partner in charge of the data storage**
</td>
<td>
**DIGIFLAK**
</td> </tr>
<tr>
<td>
**WPs and Tasks**
</td> </tr>
<tr>
<td>
The data is going to be collected in WP4 through T4.5, and then used in T5.4.
</td> </tr> </table>
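The anonymisation procedure described in the protocol above could be sketched
as follows; this is a minimal illustration under the assumption that records
are dictionaries carrying a `user_id` field (the actual storage format is not
specified), and only the ID-remapping logic follows the protocol steps:

```python
import secrets

def anonymize(records):
    """Steps 1-4 of the anonymisation procedure, sketched on a list of
    dicts with a hypothetical 'user_id' field."""
    mapping = {}  # original ID -> new ID; kept only inside this function
    for rec in records:
        original = rec["user_id"]
        if original not in mapping:
            # Step 1: a new random ID, not derivable from the original.
            mapping[original] = "anon-" + secrets.token_hex(8)
        # Step 2: replace every reference to the original ID.
        rec["user_id"] = mapping[original]
    # Step 3: the mapping is discarded when the function returns, deleting
    # the correspondence between original and new IDs.
    return records  # Step 4: D', the dataset to be sent to WP5 partners

d_prime = anonymize([{"user_id": "pseudo-0042", "feature": "gait_speed"}])
print(d_prime)
```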
1. Data Protection
For the public part of the Pseudo-Opened Data Sets, a link will be provided
from the Data management portal. At the same time, specific processes will be
defined for fetching the data sets related to a specified Data Subject, making
it possible to identify all data related to that Data Subject and allowing the
Data Subject to exercise their rights over all corresponding data sets.
# DATA SUBJECTS RIGHTS
4.1 THE RIGHT TO BE INFORMED
The Data Subject will be informed within a predefined timeframe if data sets
corresponding to him/her are to be processed, or if there is a request to
provide access to Semi-Secured data related to the Data Subject. CAPTAIN will
provide Data Subjects with information including: CAPTAIN's purposes for
processing their personal data, CAPTAIN's retention periods for that personal
data, and who it will be shared with.
4.2 THE RIGHT OF ACCESS
A specific mechanism via the Data Management Portal will be provided for Data
Subjects to access their personal data and supplementary information, allowing
them to be aware of and verify the lawfulness of the processing.
4.3 THE RIGHT TO RECTIFICATION
Data Subjects, either directly via the Data Management Portal or indirectly
via the CAPTAIN administrator, will be able to correct data sets when personal
data are found to be inaccurate.
4.4 THE RIGHT TO ERASURE
With additional stipulations, the Data Subject will be able to issue a request
for erasure of all data and, within a predefined period of no later than 40
days after the request, the CAPTAIN administrator will delete all data sets
corresponding to the Data Subject.
4.5 THE RIGHT TO RESTRICT PROCESSING
With additional stipulations, the Data Subject will be able to restrict the
processing of his/her personal data sets, subject to several rules and
exceptions defined during consent processing.
4.6 THE RIGHT TO DATA PORTABILITY
The Data Subject will be informed within a predefined timeframe if data sets
corresponding to him/her are going to be transferred outside of the current
Data Centre.
4.7 THE RIGHT TO OBJECT
Data Subjects may object at any time to the processing of their personal data.
Within a predefined timeframe, all data sets corresponding to them will then
be put on hold and excluded from processing until consent to processing is
given again.
4.8 THE RIGHT REGARDING AUTOMATED DECISION-MAKING
CAPTAIN will not base a decision solely on automated means, including
profiling, which produces legal or similarly significant effects. All
activities regarding Data Subjects' data sets will require clear and
unambiguous consent from them.
4.9 OTHER RIGHTS
Data Policies specified by the DMO will define the steps required for
exercising other Data Subject rights, such as consent withdrawal, data breach
notification and compensation.
# IN CASE OF EMERGENCY – DATA BREACHES
In the unfortunate event of a data breach, the following process should be
used (Figure 1):
Figure 1: Escalation Process
# DATA MANAGEMENT OFFICIAL
The Data Management Official, appointed by the CAPTAIN consortium, will
perform a permanent security and privacy audit of all system components and
data storage, assessing the risks associated with data breaches as well as
potential threats. The audit will cover all system components, including
hardware appliances, servers, communication channels, applications, operating
systems, access rights, etc. The audit will be extended after CAPTAIN's
completion, if required, monitoring appliances installed in Data Subjects'
households, as well as the collected data storage and data processing.
The CAPTAIN Data Management Official will be the contact point for the
supervisory authority on issues relating to the processing of data sets, as
well as any issues related to Data Subjects and the exercise of their rights.
Each Living Lab or pilot partner, in cases where collected data is stored not
in the CAPTAIN cloud but in local storage, should nominate a person with a
role and responsibility similar to the Data Management Official's. The CAPTAIN
Data Management Official should be in permanent contact with the corresponding
personnel at Living Labs/pilot partners, assessing the risks associated with
data breaches as well as potential threats.
# ETHICS INCLUDING CONSENT
CAPTAIN consortium policies about Ethics and Safety are outlined in D1.2
Ethics and Safety Manual.
EXECUTIVE SUMMARY
SENSE's _(Accelerating the Path towards Physical Internet)_ project aims at
accelerating the path towards the Physical Internet (PI), so that advanced
pilot implementations of the PI concept are well functioning and extended in
industry practice by 2030, hence contributing to at least a 30% reduction
in congestion, emissions and energy consumption. This change of paradigm is
supported by the European Technology Platform ALICE, Alliance for
Logistics Innovation through Collaboration in Europe.
This SENSE deliverable _“D4.3 Data management Plan 1st Version”_ describes the
plan for publishing collected data and composed deliverables within the SENSE
project.
The plan includes:
1. making data Findable, Accessible, Interoperable and Reusable (FAIR);
2. allocation of resources related to the data collection and management;
3. data security aspects.
This document is a living document and will evolve throughout the SENSE
project. Next version(s) will become available in case there are significant
changes to the plan. If not, a final release is planned at the end of the
SENSE project to focus on the continuation of the data publication after the
project.
# Introduction
This document is the SENSE project Deliverable D4.3, _“Data Management Plan 1st
version”_, and it refers to Task 4.1, _Dissemination, Communication and
Stakeholder Engagement Plan & Exploitation Strategy (M1-M30)_, coordinated by
ALICE. This task includes two main activities: i) developing the
Dissemination, Communication and Stakeholder Engagement Plan, and ii) defining
the exploitation strategy to further develop and maximize the expected impacts
after the finalization of the project in a sustainable way. This deliverable
is focused on point ii) above and is a report on that activity.
According to the description of Task 4.1, the scope of this document is as
follows:
SENSE will take part in the “Pilot on Open Research Data”. A Data Management
Plan (DMP) is to be delivered in M6, specifying what kind of data will be
open, i.e. detailing what data the project will generate, whether and how it
will be exploited or made accessible for verification and re-use, and how it
will be curated and preserved. The Data Management Plan will evolve during the
lifetime of the project in order to present the status of the project's
reflections on data management. This plan will follow the template for such
plans available in the H2020 Online Manual on the Participant Portal.
The SENSE project will generate a limited amount of new data. New data is
mainly linked to the naming of the categories for the areas to be used for the
Physical Internet roadmap (WP2).
Existing data about projects, companies and funding opportunities, relevant to
the Physical Internet concept will be collected, filtered and stored in the
deliverables and the Physical Internet (PI) Knowledge Platform (KP) (D3.2 and
D3.5 of the SENSE project). The purpose of the collection of this data is to
identify and map out the existing activities in the field of the Physical
Internet initiative and make it publicly available for all stakeholders in the
field of logistics innovation.
The sources for the collection of existing data will be publicly available
databases such as Crunchbase, AngelList (start-up companies) and CORDIS
(projects and funding opportunities). Different keywords, individually and/or
in combination, such as logistics, robotics, automation, etc., will be used to
extract relevant data from these public databases. The collected data will
first be stored in spreadsheets (MS Excel) and then attached to the project
deliverables as annexes, as well as uploaded to the Physical Internet KP
(under documentation). All deliverables will be uploaded to the OpenAIRE
database and made publicly available.
The most relevant Physical Internet projects, companies and funding
opportunities will be selected by the project partners, and individual entries
will be created in the Physical Internet KP. Each entered project and company
entry will provide detailed information such as:
* short description
* link to the website
* key figures (e.g. company foundation year, number of employees, funding raised, etc.)
* tags (predefined or custom)
* etc.
The following chapters will describe how the SENSE project will make data
Findable, Accessible, Interoperable and Reusable (FAIR), as well as allocation
of resources and data security.
# FAIR data
## Making data findable
The data collected in the SENSE project will be made publicly findable on the
internet by uploading the deliverables to the OpenAIRE repository and creating
entries in the Physical Internet KP.
OpenAIRE is a technical infrastructure harvesting research output from
connected data providers. OpenAIRE aims to establish an open and sustainable
scholarly communication infrastructure responsible for the overall management,
analysis, manipulation, provision, monitoring and cross-linking of all
research outcomes 2 . For each uploaded deliverable, a summary of the report
will be added, making sure it includes all desired keywords for the search
engines.
The Physical Internet KP will be accessible via the ALICE webpage. The
objective is to valorise the findings and information collected and analysed
(i.e. Physical Internet companies, R&I projects and funding programs) in the
rest of the tasks within WP3, and to make them publicly available to the
entire Physical Internet community. The platform will enhance the use of this
information, as it will offer easy access and an advanced search tool.
To enhance the KP search tool, a tagging mechanism will be developed. This
includes pre-defined tags as well as custom tags, which can be defined at the
moment of data entry. Tags will be grouped following two main themes: ALICE
Working Groups/Roadmaps and the Physical Internet areas identified in the
Physical Internet roadmaps (WP2).
Tags to be identified for ALICE Working Groups/Roadmaps:
* Sustainable, Safe and Secure Supply Chains (focused on the Roadmap towards Zero Emissions
Logistics in 2050)
* Corridors, Hubs and Synchromodality
* Information Systems for Interconnected Logistics
* Global Supply Network Coordination and Collaboration
* Urban Logistics
Tags to be identified for Physical Internet areas:
* System level functionality
* Access and Adoption
* Governance
* PI nodes
* PI network services
This is a preliminary set of pre-defined tags to become available in the KP. A
more detailed set will follow based on the outcome of the work in WP2.
Moreover, specific cases will be tagged as best practices in the KP to
increase the visibility of these specific use cases to external stakeholders.
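A minimal sketch of how these two tag groups could be represented for the
search tool (the representation itself is an assumption; the group names and
tags follow the lists above):

```python
# Pre-defined tag groups for the PI Knowledge Platform search tool; custom
# tags can be added at the moment of data entry.
PREDEFINED_TAGS = {
    "ALICE Working Groups/Roadmaps": [
        "Sustainable, Safe and Secure Supply Chains",
        "Corridors, Hubs and Synchromodality",
        "Information Systems for Interconnected Logistics",
        "Global Supply Network Coordination and Collaboration",
        "Urban Logistics",
    ],
    "Physical Internet areas": [
        "System level functionality",
        "Access and Adoption",
        "Governance",
        "PI nodes",
        "PI network services",
    ],
}

def entry_tags(predefined, custom=()):
    """Check chosen tags against the pre-defined groups, then append custom tags."""
    known = {tag for tags in PREDEFINED_TAGS.values() for tag in tags}
    unknown = [tag for tag in predefined if tag not in known]
    if unknown:
        raise ValueError(f"not pre-defined: {unknown}")
    return list(predefined) + list(custom)
```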
## Making data openly accessible
A limited amount of new data will be produced in the SENSE project. Existing
data will be collected from public sources and will be made openly available
to everyone.
All deliverables published in the OpenAIRE repository will be open and
downloadable. Accessing data published in the Physical Internet KP will
require registration by the interested person. After registration, basic
information will become visible to the registered user. Detailed information
on a project, company or funding opportunity can be seen only when subscribing
to the entry.
To register as a user of the KP, basic information will be required, i.e.
e-mail, first name, last name and country. After submitting the registration
form, the user confirms the account via the link sent by e-mail and can start
using the KP right after that. If registered users want to enhance their
profile, they will need to fill in the detailed profile information to become
more visible to the other platform users. This will be to their benefit if,
for example, they are looking for business collaboration opportunities.
The Physical Internet KP will be built on the open-source software Moodle and
customised to accommodate social networking between community members, as well
as user management functionality. The source code and website databases will
be stored on a public server (i.e. Amazon Web Services or similar).
## Making data interoperable
Data collected and published within the SENSE project will use standard
vocabulary from the field of logistics research, which will make the data
easily findable for all stakeholders. Moreover, all data to be published in
the Physical Internet KP follows the standard structure of the ALICE Working
Groups and the Physical Internet areas to be defined in the Physical Internet
Roadmap (using tags). This will ensure that the data can easily be found by
the community familiar with the Physical Internet initiative and the ALICE
organisation.
Data related to projects and companies, collected in spreadsheets, can be
downloaded either from the annexes of the deliverables or from the KP. The
spreadsheet tables will follow structures which allow easy manipulation of the
collected data (filtering, sorting, etc.), as illustrated below.
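As an illustration of the kind of manipulation such a structure allows (a
sketch only: the file name and column names below are assumptions, not the
project's actual schema):

```python
import pandas as pd

# Load the collected entries; the file and columns are hypothetical examples.
df = pd.read_excel("pi_companies.xlsx")

# Filter by a keyword tag and sort by a key figure.
logistics = df[df["tags"].str.contains("logistics", case=False, na=False)]
print(logistics.sort_values("funding_raised", ascending=False).head())
```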
## Increase data re-use
All data collected and published in the SENSE project originates from open
sources and thus will be free of charge and openly accessible after
republishing within the SENSE project. Only a simple user registration is
required to use the Physical Internet KP.
Data becomes available immediately at the moment of uploading the deliverable
to the OpenAIRE portal or to the Physical Internet KP.
Although the establishment of the KP is supported by the European Commission
under the H2020 programme, the KP will be developed with the vision of keeping
it up and running after the end of the SENSE project, supported and maintained
by ETP ALICE (see below).
Thus, no end date has been established for keeping the collected data
published.
Furthermore, the aim is to improve the Physical Internet KP over time and
publish more data as the Physical Internet initiative evolves globally. The
social networking element of KP will support this vision and the final goal is
to have the Physical Internet KP as a global repository for the Physical
Internet Initiative activities.
# Allocation of resources
All costs linked to data collection in the SENSE project are financed by the
European Commission under the H2020 programme (mainly WP3). The same applies
to the customisation work on the KP source code. ALICE is the lead partner in
Task 4.1 and the project coordinator of SENSE, and is thus responsible for any
related data publication in OpenAIRE and the Physical Internet KP.
After the SENSE project, data collection and entry into the Physical Internet
KP will continue. A detailed plan will be developed and specified for that
purpose. However, this task will most probably remain at the level of the
ALICE Working Groups and become part of their activities. Thus, all direct and
indirect costs linked to data collection and entry after the SENSE project
will be covered by ALICE members.
# Data security
The security of data published in OpenAIRE is under OpenAIRE's responsibility.
The repository uses SSL certification to secure the connection to its website.
The security of data published in the Physical Internet KP will be the
responsibility of the website server and database provider. SSL certificates
will be used to secure the connection to the website.
To avoid user registrations in the Physical Internet KP by robots, the
registration process must be completed by the user via the link sent by
e-mail. This will help minimize the risk of fake user accounts and will ensure
the quality of the user database and data entries.
# Introduction
The purpose of the Data Management Plan (DMP) is to provide an analysis of the
main elements of the data management policy that will be used by the
Consortium with regard to the project research data.
The DMP covers the complete research data life cycle. It describes the types
of research data that will be generated or collected during the project, the
standards that will be used, how the research data will be preserved and what
parts of the datasets will be shared for verification or reuse. It also
reflects the current state of the Consortium agreements on data management and
must be consistent with exploitation and intellectual property rights (IPR)
requirements.
Research data form the basis of the DRIVEMODE project. They play a crucial
role and should be effectively managed to ensure the verification and reuse of
research results, and the sustainable storage of the dataset.
This Data Management Plan aims at providing a timely insight into facilities
and expertise necessary for data management both during and after the
DRIVEMODE research, to be used by all DRIVEMODE researchers and their
environment, including: DRIVEMODE’s Project Coordination Committee, Technical
Management Team, work packages (WPs), task leaders, research funders, and
research users.
The most important reasons for setting up this Data Management plan are:
* Embedding the DRIVEMODE project in the EU policy on data management, which is increasingly geared towards providing open access to data that are gathered with funds from the EU;
* Enabling verification of the research results of the DRIVEMODE project;
* Stimulating the reuse of DRIVEMODE data by other researchers;
* Enabling the sustainable and secure storage of DRIVEMODE data in the data repositories;
* Helping to streamline the research process from start to finish. The data management plan clarifies in advance the required data expertise and facilities to store data.
Open access is defined as the practice of providing on-line access to
scientific information that is free of charge to the reader and that is
reusable. In the context of research and innovation, scientific information
can refer to peer-reviewed scientific research articles or research data.
Research data refers to information, in particular facts or numbers, collected
to be examined and considered and as a basis for reasoning, discussion, or
calculation. In a research context, examples of data include statistics,
results of experiments, measurements, observations resulting from fieldwork,
survey results, interview recordings, computational results and images. The
focus is on research data that is available in digital form.
The Consortium strongly believes in the concepts of open science, and in the
benefits that the European innovation ecosystem and economy can draw from
allowing the reuse of data at a larger scale.
Data sharing in the open domain can be restricted for a legitimate reason,
such as protecting results that can reasonably be expected to be commercially
or industrially exploited. Strategies to limit such restrictions will include
anonymizing or aggregating data, agreeing on a limited embargo period, or
publishing selected datasets.
# Data Summary
The project develops a new compact powertrain traction module with an optimal
trade-off between efficiency, manufacturability and cost. The collected data
are therefore related to requirement specifications based on the vehicle
market. This information provides the ground for concept selection and the
definition of top-level performance criteria. The generated data provide
performance evaluations for each particular component and for the drivetrain
as a whole. In that way, the achieved performance is validated against the
specified one.
## Research Data Types and Formats
The project will produce model data, design data, measurement data, simulation
models, and algorithms. Some data will be collected as tables and stored in
CSV format along with corresponding statistical results. The estimated types
and quantities are the following:
* Model data, geometry models in common CAD formats (e.g. STEP, IGES), estimated size is tens of MB.
* Measurement data from components and the drivetrain, estimated size is a few MB.
* Simulation models in Matlab, Excel, CAD tools, FE tools and other software packages, estimated size is 1 GB.
* Qualitative data from the interviews and discussions with the designers, estimated size is a few MB (structured forms, transcribed data, evaluation matrices).
It is expected to produce data sets according to Table 1, which corresponds to
the project structure.
Table 1 Types of datasets in DRIVEMODE project
<table>
<tr>
<th>
**#**
</th>
<th>
**Dataset**
</th>
<th>
**Lead partner**
</th>
<th>
**Related WPs**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Passenger cars and light duty vehicles statistical data
</td>
<td>
Chalmers
</td>
<td>
WP2
</td> </tr>
<tr>
<td>
2
</td>
<td>
Transmission test and simulation data
</td>
<td>
BorgWarner
</td>
<td>
WP6
</td> </tr>
<tr>
<td>
3
</td>
<td>
Electrical motor test and simulation data
</td>
<td>
AVL
</td>
<td>
WP3
</td> </tr>
<tr>
<td>
4
</td>
<td>
Inverter test and simulation data
</td>
<td>
Danfoss
</td>
<td>
WP4
</td> </tr>
<tr>
<td>
5
</td>
<td>
Drivetrain test and simulation data
</td>
<td>
Danfoss
</td>
<td>
WP2,WP3,WP4,WP6
</td> </tr>
<tr>
<td>
6
</td>
<td>
Cooling test and simulation data
</td>
<td>
SCIRE
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
7
</td>
<td>
Vehicle test data
</td>
<td>
NEVS
</td>
<td>
WP7
</td> </tr>
<tr>
<td>
8
</td>
<td>
Requirements, concepts evaluation and architecture decision tree
</td>
<td>
VTT
</td>
<td>
WP2
</td> </tr> </table>
The initial set of data used to define the specification is a collection of
available data on market vehicle types and their properties. This information
is obtained from open sources and from manufacturers' technical documentation.
It is complemented by statistical analysis and clustering into specific
groups, according to which the performance specifications are defined. This
type of data is stored in the form of a table and then preserved in CSV format
with accompanying metadata.
The research project will produce various quantitative measurement and
simulation data. The results are collected and then stored in the formats
convenient for and traditionally used in the specific engineering area.
Measurement data collected by data loggers and human observations will be
stored in a suitable ASCII format. The simulation data include geometry files
and files describing the physical model. Whenever possible, open-source and
vendor-independent formats such as STEP and IGES will be used. Further,
computation results will be stored in suitable text or binary documents.
Specific datasets may be associated with scientific publications (i.e.
underlying data), public project reports, and other raw or curated data not
directly attributable to a publication. The policy for open access is
summarized in Figure 2.
Figure 2: Research data options and publishing times (decision flow: research
data linked to exploitation receive no open access; deposited data linked to a
publication are published on the publication date, i.e. gold open access, or
at most 6 months later, i.e. green open access; data not linked to a
publication are published before the project end date)
Open access is provided for data that are not linked to exploitation. These
data are stored in the selected repository and the open publication date is
decided. The decision depends on whether the data belong to a publication; in
that case, the date is defined by the publication's access model. When data
are not linked to a publication, they are openly published in the repository
before the official end of the project.
# FAIR Data
To ensure that data are findable, the final data sets are stored in the ZENODO
repository, which is an open repository, has no restrictions on research area,
and offers flexible licensing. Identifiers such as digital object identifiers
(DOIs) will be assigned to create persistent links. Versioning is ensured by
the functionality of the selected repository. As an alternative solution, the
university partners can store data in their long-term preservation
repositories under open access.
The research data will be provided with keyword descriptions and corresponding
metadata. As there is no predefined standard, the selection of format and
vocabulary will vary with the domain of expertise. However, to ensure
consistency, at least the minimum set compliant with DataCite's Metadata
Schema will be utilized.
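For reference, the mandatory properties of DataCite's Metadata Schema form a
small set; a minimal record could look as follows (all values are
placeholders, not actual project data):

```python
# Minimal metadata record covering DataCite's mandatory properties.
minimal_record = {
    "identifier": {"identifierType": "DOI",
                   "identifier": "10.5281/zenodo.0000000"},  # placeholder DOI
    "creators": [{"creatorName": "DRIVEMODE Consortium"}],
    "titles": [{"title": "Drivetrain test and simulation data"}],
    "publisher": "Zenodo",
    "publicationYear": "2020",
    "resourceType": {"resourceTypeGeneral": "Dataset"},
}
```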
The data will be documented comprehensively, to ensure usability and
interpretation of the data also in the future. The metadata accompanying the
stored raw data will be saved in a proper metadata standard format, describing
all stored variables. Information about the software used to obtain the data,
including its version number, will be included. Preference will be given to
open-source tools.
The data produced during the project and not related to exploitation will be
shared, at the latest by the end of the project. The data are planned to be
shared as soon as it is practically reasonable. The data are shared under the
Creative Commons Attribution 4.0 International (CC BY 4.0) license (or another
relevant license), and persistent identifiers (PIDs) will be used to make
access to the data easier.
# Allocation of Resources
Each DRIVEMODE partner has to respect the policies set out in this DMP. Data
sets have to be created, managed and stored appropriately and in line with
applicable legislation.
The Project Coordinator has a particular responsibility to ensure that data
shared through the DRIVEMODE website are easily available, but also that
backups are performed and that proprietary data are secured.
VTT, as the coordination partner, will ensure dataset integrity and
compatibility for its use during the project lifetime by different partners.
Validation and registration of data sets and metadata is the responsibility of
the partner that generates the data in the WP. Metadata constitutes an
underlying definition or description of the datasets, and facilitate finding
and working with particular instances of data.
Backing up data for sharing through open access repositories is the
responsibility of the partner possessing the data.
Quality control of these data is the responsibility of the relevant WP leader,
supported by the Project Coordinator.
# Data Security
In everyday work, each consortium partner is responsible for maintaining their
part of research data in safe environment. It includes storing the data in
media that support regular back up and secured access. This should ensure
safety and integrity of research results during project lifecycle.
For data exchange inside the DRIVEMODE consortium the common workspace is
used. It is based on Nextcloud environment and ensures versioning of data
files and secure back up in the cloud storage.
The DRIVEMODE project database will be designed to remain operational for 5
years after the project end. By the end of the project, the final dataset will
be transferred to the ZENODO repository for long term preservation, which
ensures sustainable archiving of the final research data. Items deposited in
ZENODO will be retained for the lifetime of the repository, which is currently
the lifetime of the host laboratory CERN and has an experimental program
defined for at least the next 20 years.
Data files and metadata are backed up on a regular basis, as well as
replicated in multiple copies in the online system. All data files are stored
along with an MD5 checksum of the file content.
Regular checks of files against their checksums are made, as sketched below.
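A minimal sketch of such a checksum-based integrity check, using only Python's
standard library (the file path is a placeholder):

```python
import hashlib

def md5_checksum(path, chunk_size=8192):
    """Compute the MD5 checksum of a file's content, reading in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path, expected_checksum):
    """Regular check of a file against its stored checksum."""
    return md5_checksum(path) == expected_checksum
```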
# Ethical aspects
The DRIVEMODE partners are to comply with the ethical principles as set out in
Article 34 of the Grant Agreement, which states that all activities must be
carried out in compliance with:
The ethical principles (including the highest standards of research integrity,
e.g. as set out in the European Code of Conduct for Research Integrity, and
including, in particular, avoiding fabrication, falsification, plagiarism or
other research misconduct), Commission Recommendation (EC) No 251/2005 of 11
March 2005 on the European Charter for Researchers and on a Code of Conduct
for the Recruitment of Researchers (OJ L 75, 22.03.2005, p. 67), the European
Code of Conduct for Research Integrity of ALLEA (All European Academies) and
ESF (European Science Foundation) of March 2011
(http://archives.esf.org/coordinating-research/mo-fora/research-integrity.html),
and applicable international, EU and national law.
Furthermore, activities raising ethical issues must comply with the ‘ethics
requirements’ set out in Annex 1 of the Grant Agreement.
## Confidentiality
All DRIVEMODE partners must keep any data, documents or other material
confidential during the implementation of the project and for four years after
the period set out in Article 3, as per Article 36 of the Grant Agreement.
Further detail on confidentiality can be found in Article 36 of the Grant
Agreement.
# Introduction
## Abstract
This deliverable presents the LeMO data management plan. The deliverable
outlines how the research data collected or generated will be handled during
and after the LeMO project, describes which standards and methodology for data
collection and generation will be followed, and whether and how data will be
shared. This document follows the template provided by the European Commission
in the Participant Portal. The Data Management Plan will be a living document
throughout the project. This initial version will evolve during the project
according to the progress of project activities.
## Purpose of the document
A Data Management Plan is a document describing how research data from a
project are to be managed, from project start to finish. Over the course of
its three years, LeMO will produce many results to support further deployment.
The Dissemination Plan sets out the means of promoting those results to the
research community. The main elements of that plan concern open access to the
scientific publications of the LeMO action. To increase the adoption of LeMO's
results, it is useful to support their reliability. One way to achieve this is
to enable the validation of the results by other researchers. This is possible
only by making available any data that accompany the technical results.
LeMO is participating in the Horizon 2020 Pilot on Open Research Data as a
formal way of making its research data available. This report complements
LeMO’s Dissemination Plan by putting forward a plan for managing the research
data. More precisely, it describes what data could be shared, any associated
metadata, how the data will be made available, and how they will be preserved.
## Target audience
The target audience for this deliverable is:
* Partners and Advisory & Reference Group in the LeMO project
* European Commission
* EU Parliament
* Horizon 2020 projects and related transport projects (cf. clustering activities)
* Organisations and experts involved in the LeMO case studies
* Transport organisations both public and private
# Data Management in Horizon 2020
According to the European Commission (EC), all project proposals submitted to
"Research and Innovation actions", "Innovation actions" and "Coordination and
support actions" have to include a section on research data management, which
is evaluated under the criterion 'Impact'. Projects participating in the pilot
action on open access to research data have to develop a data management plan
(DMP) to specify what data will be open. The DMP is **defined as**:
_“Data Management Plans (DMPs) are a key element of good data management. A
DMP describes the data management life cycle for the data to be collected,
processed and/or generated by a Horizon 2020 project. The use of a Data
Management Plan is required for projects participating in the Open Research
Data Pilot. Other projects are invited to submit a Data Management Plan if
relevant for their planned research.”_
The **purpose of a DMP** is to provide an analysis of the main elements of the
data management policy that will be used by the applicants with regard to all
the datasets that will be generated by the project.
### Table 1 Clarification of terms
<table>
<tr>
<th>
**Research data**
</th>
<th>
Research data is the evidence that underpins all research conclusions (except
those which are purely theoretical) and includes data that have been
collected, observed, generated, created or obtained from commercial,
government or other sources, for subsequent analysis and synthesis to produce
original research results. These results are then used to produce research
papers and submitted for publication.
</th> </tr>
<tr>
<td>
**Open research data**
</td>
<td>
Openly accessible research data can typically be accessed, mined, exploited,
reproduced and disseminated, free of charge for the user.
</td> </tr>
<tr>
<td>
**Secondary data**
</td>
<td>
Secondary data are data that already exist, regardless of the research to be
conducted.
</td> </tr>
<tr>
<td>
**Open access**
</td>
<td>
Open access is understood as the principle that research data should be
accessible to relevant users, on equal terms, and at the lowest possible cost.
Access should be easy, user-friendly and, if possible, Internet-based.
</td> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
Metadata is data used to describe other data. It summarizes basic information
about data, which can make finding and working with instances of data easier.
</td> </tr>
<tr>
<td>
**Research data repositories**
</td>
<td>
Research data repositories are online archives for research data. They can be
subject based/thematic, institutional or centralised.
</td> </tr> </table>
Considering privacy and data protection issues, scientific research data
should be easily discoverable, accessible, assessable and intelligible, usable
beyond the original purpose for which they were collected, and interoperable
to specific quality standards.
Projects piloting the Open Research Data activity have to consider the
following aspects: regarding the digital research data generated in the action
('data'), the beneficiaries must deposit them in a research data repository
and take measures to make it possible for third parties to access, mine,
exploit, reproduce and disseminate — free of charge for any user — the
following:
1. the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible;
2. other data, including associated metadata, as specified and within the deadlines laid down in the data management plan.
Projects must take measures to enable third parties to access, mine, exploit,
reproduce and disseminate research data, for example by attaching a Creative
Commons licence (CC BY or the CC0 tool) to the deposited data. More
information on Creative Commons licences can be found on the respective
website, creativecommons.org.
# Use of Data in the Research Process
To enable the reuse of research data, the data must be processed after they
are generated. The value chain for digital research data is divided into four
main steps:
(Value chain: secondary data → data collection → processing & analysis →
storage & publishing → long-term management → access to data & metadata)
The first step is the collection or creation of data, where the data are kept
in a workspace. It is best practice to employ accepted documentation standards
already at this stage, as this will simplify subsequent efforts relating to
long-term storage and publication.
In the second step, the data are quality assured. This is a process for
determining whether the raw data are to be preserved. Analysis of the quality-
assured data is also carried out during this phase.
The third step is preparing the data for long-term archiving and publication.
This involves discipline-specific documentation and coding of data (metadata)
describing, for example, who is responsible for the data, what the data
contain, what has been done with the data, and who can use them for which
purpose. These metadata will make it possible for other researchers to find
and reuse the data. Relevant information about the data will be indexed in
search engines/catalogues when the data are transferred to a data repository.
As the data are stored in a data repository, the data set is assigned an
identifier which follows the data set throughout its lifetime. The data are
now accessible to other researchers.
When the data are stored in a repository, they are preserved for the future.
This usually entails a storage period of at least five years. Moreover, data
deposited in long-term storage must be adapted to new technologies and formats
to ensure that they remain findable, available, interoperable and reusable in
the future.
# FAIR Data
The international FAIR principles have been formulated as a set of guidelines
for the reuse of research data. The acronym FAIR stands for findable,
accessible, interoperable and reusable. Research data must be of a quality
that makes them findable, accessible and reusable.
## Making data findable, including provisions for metadata
Data will be stored at the coordinator's (Western Norway Research Institute)
_Dropbox_ repository and will be kept for 5 years after the end of the
project. Where requested, data will be kept for 2 more years.
A naming convention will include a concise description of contents, the host
institution collecting the data and the month of publication.
Version numbering will only be an issue if a participant requests withdrawal
of their data, in which case a version number will be added to the filename.
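A minimal sketch of such a file name, assuming underscore-separated fields
(the separator and exact field order are assumptions):

```python
def dataset_filename(description, institution, publication_month, version=None):
    """Build a file name from a contents description, the host institution
    and the month of publication; a version number is appended only if a
    participant has requested withdrawal of their data."""
    parts = [description, institution, publication_month]
    if version is not None:
        parts.append(f"v{version}")
    return "_".join(parts) + ".csv"

# e.g. "case-study-interviews_WNRI_2019-05.csv" (hypothetical example)
print(dataset_filename("case-study-interviews", "WNRI", "2019-05"))
```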
No specific standards or metadata have been identified for the time being for
the proposed datasets.
The real names of participants will NOT be distributed.
Data will be shared only in relation to publications (deliverables and
papers). As such, the publication will serve as the main piece of metadata for
the shared data. When this is not adequate for the comprehension of the raw
data, a report will be shared along with the data, explaining their meaning
and methods of acquisition.
## Making data openly accessible
Where possible, data will be made available, subject to ethics and participant
agreement. However, the personally identifiable nature of the data collected
within LeMO means that in most instances it would be difficult to release the
collected data. Where data are made available, this will be done using the
Western Norway Research Institute's Dropbox.
Prior to release, a requesting party will need to contact the Project
Coordinator describing their intended use of a dataset. The Project
Coordinator will send a terms and conditions document for them to sign and
return. Upon return, the dataset will be released. Documentation will be
included with the release of the data.
## Increase data re-use through clarifying licenses
Due to the sensitive nature of the data they will only be available on
application and their use will be restricted to the research use of the
licensee and colleagues on a need-to-know basis. This non-commercial licence
is renewable after 2 years, data may not be copied or distributed and must be
referenced if used in publications. These arrangements will be formalised in a
User Access Management licence which describes in detail the permitted use of
the data.
## Interoperability
The concept of interoperability demands that both data and metadata be
machine-readable and that consistent terminology is used.
# Other Issues
## Allocation of resources
Data will be stored at the Western Norway Research Institute's Dropbox
repository and will be kept for 5 years after the end of the project. Where
requested, data will be kept for 2 more years.
## Data security
Data is managed and supported by a team of big data researchers at the Western
Norway Research Institute and subject to the institute’s data security
measures and backup policies.
Data are transferred as Zip archives. Sensitive data are encrypted using
shared-key methods, with the password distributed separately (a minimal sketch
follows).
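A minimal sketch of such a shared-key scheme, using the `cryptography`
package's Fernet symmetric encryption (the package choice is an assumption;
the key stands in for the separately distributed secret):

```python
from cryptography.fernet import Fernet

# The shared key is distributed separately from the data, as described
# above; key management itself is out of scope for this sketch.
key = Fernet.generate_key()
cipher = Fernet(key)

token = cipher.encrypt(b"sensitive interview transcript")  # placeholder payload
assert cipher.decrypt(token) == b"sensitive interview transcript"
```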
## Ethical aspects
All our work is subject to ethical approval (locally, via an Ethics Committee
Chair and the EC REA). Prior to data collection related to the case studies,
participants will agree to the terms and conditions outlined in a Participant
Information and Consent Form. An ethical approach will be adopted and
maintained throughout the case-study process. The responsible partners will
ensure that the EU standards regarding ethics and data management are
fulfilled. Each partner will conduct the survey according to the provisions of
national legislation, as adjusted to the respective EU directives for data
management and ethics.
## Datasets
To be added when relevant.
# Executive summary
This plan describes the data management life cycle for the data to be
collected, processed and/or generated by the project. As part of making
research data findable, accessible, interoperable and re-usable (FAIR), it
includes information on:
» The handling of research data during and after the end of the project,
» What data will be collected, processed and/or generated,
» Which methodology and standards will be applied,
» Whether data will be shared/made openly accessible and,
» How data will be curated and preserved (during and after the end of the
project).
The DMP will be **updated** over the course of the project whenever
significant changes arise, such as (but not limited to):
» New data;
» Changes in consortium policies;
» Changes in consortium composition and external factors (e.g. new consortium
members joining or old members leaving).
The Data Management Plan (DMP) will be updated, as a minimum, in time for the
final review of the project; specifically, for the final review, it will
report the data management activities, showing compliance with the approach
depicted in the DMP.
# 1 INTRODUCTION
## 1.1 SCOPE OF THE DOCUMENT
The purpose of this document is to describe the Data Management approach and
provide the plan for managing the data generated and collected during the
project, the Data Management Plan (DMP).
By following the principles outlined by the “Guidelines to the Rules on Open
Access to Scientific
Publications and Open Access to Research Data in Horizon 2020” and the
“Guidelines on FAIR Data Management in Horizon 2020”, the DMP specifically
describes the data management life cycle for all datasets to be collected,
processed and/or generated by the project. It includes.
» The handling of data during and after the project,
» What data will be collected, processed and generated,
» What methodology and standards will be applied, » Whether data will be
shared/made available and how, » How data will be stored and preserved.
## 1.2 STRUCTURE OF THE DOCUMENT
This document is divided into six sections.
Section 1 defines the objectives, scope and structure of the document.
Sections 2 and 3 describe the data summary and FAIR data (making data openly
accessible, interoperable and reusable).
Sections 4, 5 and 6 provide the allocation of resources, data security and
ethical aspects.
# 2 DATA SUMMARY
OPTICS2 aims at providing a comprehensive evaluation of relevant Safety and
Security research & innovation (R&I) in aviation and air transport. The main
objective of the project is assessing if Europe is performing the right safety
and security research and if the research is delivering the expected benefits
to society. OPTICS2 will be the continuation of the work started in OPTICS.
The successful methodology for assessing European research developed in OPTICS
will be further refined in OPTICS2 – based on the lessons learnt over the past
four years – and extended to security research as well. The purpose is to
offer ACARE and other key aviation stakeholders a wider perspective on recent
and ongoing research in Europe, and to show where the main gaps and
bottlenecks are on the path towards the achievement of the Flightpath 2050
safety and security goals.
In this framework, the data collected will mainly be information related to:
» Ongoing or completed research projects (project results and related
documentation, reference documents, etc.) in the field of safety and security
in the transport domain, to complement the OPTICS2 state of the art (various
sources will be used, mostly CORDIS, project websites and project
coordinators);
» External stakeholders involved in different types of activities, e.g.
workshops, interviews, roundtable consultations and dissemination events.
These may include basic personal data for the organisation of the project's
events or for performing interviews, speakers' presentations at workshops,
consultations, etc. (for this please see also D7.1).
In addition to the above mentioned, OPTICS2 will use the results obtained by
the OPTICS project in which the majority of the OPTICS2 partners (DBL, ECTL,
NLR, CIRA, ONERA, DLR) were also involved.
Data resulting from these activities will be subject to EU legislation,
including the "Regulation (EU) 2016/679 of the European Parliament and of the
Council of 27 April 2016 on the protection of natural persons with regard to
the processing of personal data and on the free movement of such data, and
repealing Directive 95/46/EC (General Data Protection Regulation)", which
applies from 25 May 2018.
The generated data will mainly consist of an open repository, reports on the
state-of-the-art assessment and roundtable consultations, and materials from
workshops and dissemination events (agendas, proceedings, etc.), plus internal
communication documents (such as minutes of meetings, emails, agreements,
etc.), which will all be available internally on the InGrid tool, Confluence
software (developed by Atlassian), and in the project deliverables. They will
be treated differently according to the degree of confidentiality (public or
confidential) stated in the Grant Agreement.
Table 1 summarises the data planned to be collected and generated within the
project duration.
_Table 1: Data planned to be collected and generated_
| **Type of data** | **Description** |
| --- | --- |
| Collected data | Research project documentation |
| | Reference documents |
| | Basic personal data of external stakeholders involved in events, interviews, etc. |
| | Presentations for workshops and events |
| Generated data | Open repository |
| | Assessment results for each project (first output of the assessment) |
| | State-of-the-art results and assessment syntheses per SRIA action area/line (processing of the above-mentioned assessment results per project) |
| | Reports on roundtable consultations |
| | Reports and material on workshops and dissemination events |
| | Minutes of meetings, webexes, etc. |
The following principles will be used to guide the data management life cycle:
» No data that is not strictly necessary to accomplish the current study
will be collected; a data minimisation policy will be adopted at every level
of the study;
» The research activities involving external stakeholders will be strictly
limited to those engaged on a voluntary basis. Participants will be asked to
sign a consent form (the detailed procedure to involve external stakeholders
is explained in D7.1). Moreover, participants in OPTICS2 activities may
request the removal of their recorded data at any moment;
» The collected data will always be made available as aggregated metadata.
Data will not be modified or falsified;
» Anonymity of data will always be guaranteed. The project will neither
collect nor store personal and biographical data (e.g. telephone numbers,
address of residence, age, sex, etc.);
» The exchange of data between the partners involved in the project will be
done through a reliable tool and platform, Confluence software;
» Specific care will be applied when research and validation activities are
linked to sensitive areas (e.g. confidential security data, personal aspects,
etc.).
The data collected and generated by OPTICS2 will be used by the consortium
partners. The aggregated data will be included in reports that will be made
available to the European Commission and stakeholders.
All the data collected, processed and generated by OPTICS2 will be available
to the project partners through a wiki-like online tool named "InGrid",
Confluence software (developed by Atlassian). The tool is accessible at
https://research.innaxis.org. The access permissions will be managed by the
project coordinator. This software has been adapted to the specific
requirements of OPTICS2 in order to serve as a useful Management Information
System for the project. The suitability of InGrid for this kind of project has
been amply proven by Innaxis (part of the OPTICS2 consortium) over the past
several years, with very satisfactory results. The features of InGrid that
make it suitable for this project are described in the following list:
» Customisable: the structure of any space can be adapted to the specific
requirements of the purpose for which it is created. The necessary structure
of pages can be developed and easily changed at any moment. The layout and
appearance can also be personalised.
» Scalable: the main actions within the tool can create, edit and comment on
pages as well as attach documents (of any format). There are no practical
limits in the number of pages, attachments or comments.
» Online: easily accessed from anywhere in the world with an internet
connection. It works properly without any pre-requisite downloads and with
almost every browser (Explorer, Firefox, Chrome, Safari, etc.) on any
operating system (Windows, iOS, Linux, Android).
» Wiki-like: specifically designed to take into consideration the
collaborative paradigm which is key for the success of such a project.
» Secure: The tool allows a secure management of user access. The access
requires a user and a secure password which is specific to each person
participating in the project. The particular permissions of each user can be
restricted, including the permissions of viewing or editing particular pages
of the space. This feature also enables the creation of groups of users who
can only see certain part(s) of the project, as needed.
» User-friendly: Advanced features of Confluence software can be easily learnt
by any user over a short period of time.
» Dynamic and Real-time: with the necessary commitment of its users,
Confluence software is a very dynamic tool which shows the updates of each
part of the project in real time. This allows every participant to have access
to the latest version of each section of the project, facilitating the flow of
information and exchange of outputs/inputs among tasks and deliverables. Also,
it enables reducing to a bare minimum the review process of several versions
which is usually time-consuming and prone to errors.
» Reliable: thanks to the ability to see the history of changes, Confluence
software proves to be a very reliable instrument that guarantees the safe
storage of data.
» Suitable as a repository of deliverables and other relevant documents
(including history of changes): all deliverables of every WP will be stored in
a single page so they can be easily found. Another page will be created
containing all the minutes of the meetings. It is important to stress
that Confluence software stores not only the latest version of each attachment
but also the previous ones, even when the same title is given, so old
versions can be consulted or restored if necessary.
The DMP will be updated as a minimum in time with the final review of the
project.
# 3 FAIR DATA
## 3.1 MAKING DATA FINDABLE, INCLUDING PROVISIONS FOR METADATA
All the data will be available to the project partners on the InGrid tool,
Confluence software (see Section 2 for a detailed description). In addition,
public deliverables, including only aggregated data or data coming from
already public sources, will be available on the OPTICS2 website
(http://www.opticsproject.eu/products/).
Formal versions of reports and other documents shall be exported to .doc and
.pdf formats and submitted by the Project Coordinator to the European
Commission by electronic means (Participant Portal system), using the template
developed at the beginning of the project. A repository list recording all the
documents produced by the Consortium members shall be available through the
InGrid page "Final deliverables", which will also include documents required
by the EC outside the formal deliverables (Reports, Publishable Summary,
Results, etc.).
Regarding the OPTICS2 naming convention, it will follow the DoA indications
for the deliverables (e.g. D_WP number_deliverable number). For the project
assessment, the generated Excel file name will include the following
information: general category (Security or Safety), funding programme (SESAR,
CleanSky, etc.) and a reference to the workflow (revision level).
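As a purely illustrative sketch of these conventions, the following Python fragment assembles file names of the two kinds described above; the helper names and example values are hypothetical and not part of the DMP:

```python
# Hypothetical helpers illustrating the naming conventions above;
# only the patterns themselves come from the DMP.

def deliverable_name(wp: int, number: int) -> str:
    """DoA deliverable pattern: D_<WP number>_<deliverable number>."""
    return f"D_{wp}_{number}"

def assessment_file_name(category: str, programme: str, revision: int) -> str:
    """Assessment files encode category, funding programme and revision level."""
    if category not in ("Safety", "Security"):
        raise ValueError("category must be 'Safety' or 'Security'")
    return f"{category}_{programme}_rev{revision}.xlsx"

print(deliverable_name(1, 2))                      # D_1_2
print(assessment_file_name("Safety", "SESAR", 1))  # Safety_SESAR_rev1.xlsx
```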
Furthermore, all Microsoft Word and Adobe Acrobat versions shall be kept for
future reference and stored in InGrid, Confluence software. The Coordinator
shall be responsible for keeping the repository list updated and for
transferring the public deliverables to the website.
In addition to being available through the Participant Portal and public
website (if applicable), all the deliverables will be submitted by the
deadlines in an electronic format (PDF or MS Word compatible) by e-mail to the
EC project officer in charge of the project, if required. Simone Pozzi
([email protected]) from Deep Blue is the point of contact regarding
OPTICS2 documentation and repositories.
OPTICS2 does not create metadata.
## 3.2 MAKING DATA OPENLY ACCESSIBLE
The generated data that were identified as public in the Grant Agreement
together with the scientific publications that could arise from this project
will be made available on the project website. In particular the publications
will be given green access Open Access (OA), also referred to as self-
archiving, that is the practice of placing a version of an author’s manuscript
into a repository, making it freely accessible for everyone.
According to the DoA, OPTICS2 should create an Open Database containing the
assessed safety and security projects and the consolidated assessment results.
Deliverable D1.2 due at Month 10 will define the requirements for the
development of such database.
As stated in Article 29.2 of the Grant Agreement, open access to all peer-
reviewed scientific publications relating to the results of the project will
be ensured. To meet this requirement, the coordinator will, at the very least,
ensure that any scientific peer-reviewed publication can be read online,
downloaded and printed. Open access will be provided also to other types of
scientific publication including, if applicable, monographs, books, conference
proceedings, grey literature (informally published written material not
controlled by scientific publishers, e.g. reports).
## 3.3 MAKING DATA INTEROPERABLE
Not applicable.
## 3.4 INCREASE DATA RE-USE (THROUGH CLARIFYING LICENSES)
OPTICS2 does not envisage providing any data licensing, and the disclosable
data (most of the time based on publicly available data) will, according to
the agreed disclosure rules, be freely accessible. OPTICS2's data are expected
to be kept for four years after project completion, a period in which they
may be re-used.
# 4 ALLOCATION OF RESOURCES
No costs for making data FAIR in OPTICS2 were allocated to software tool
implementation and maintenance under the partners' "other direct costs",
since the tool used (InGrid, Confluence software) is already owned by Innaxis,
which uses it for the management of several projects and for the internal
management of resources.
The person responsible for coordinating the data management process will be
the Project Coordinator, and the Project Management Board will be the ultimate
decision-making body of the Consortium and responsible for taking major
strategic decisions with respect to data management if necessary. The effort
for coordinating the data management process is included in the coordinator’s
WP6 effort.
The long-term preservation of data will be discussed with the EC at the
earliest convenience and required resources will be allocated from the OPTICS2
WP5 budget, if necessary.
# 5 DATA SECURITY
The following rules will apply to data management in order to ensure data
security:
» The data will be stored and analysed on servers at the EU-based project
partners' institutions. Backups will be stored on encrypted hard disks. The
computers and hard disks will be either in a project room or in the data
centre of selected institutions, password-protected and protected by general
security measures (such as a swipe card entry system at the door) that
guarantee that only employees with the appropriate training and right of
access have access to the rooms where the computers and data are stored;
» Data will consist of digital and/or paper data. Digital data will be stored
on hard disks disconnected from the network, which will be kept in secured
drawers, and/or on secured servers with defined protocols that limit access to
authorised personnel. Paper data will be stored in secured drawers with
access limited to authorised personnel;
» Strong passwords will be established. These will include a combination of
capital and lowercase letters, numbers, and symbols, and will be 8 to 12
characters long. Use of any personal data (such as birthdays), common words
spelled backwards, sequences of characters or numbers, or characters that are
close together on the keyboard will be avoided;
» Password checkers will be used to assess the strength of the passwords (an
illustrative sketch of such a check is given after this list).
» Every individual member of the OPTICS2 Consortium will have his/her own
username and password for any login system, from desktops to CMS. Shared
passwords will not be allowed;
» Writing down of passwords will not be allowed;
» Frequent password changes will be encouraged;
» Data access will be limited to authorised staff only, and only within the
spatial and temporal limits negotiated with the data owners, and in no case
beyond what is prescribed by current legislation. The people responsible for
the management of data will be the ones mentioned in the informed consent
document presented to the user;
» All changes in the data and documents will be easily trackable through the
history of changes thus guaranteeing their reliability.
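As a minimal, hedged sketch of what such a password check could look like under the rules above (8-12 characters, mixed character classes, no simple sequences), consider the following Python fragment; it is illustrative only, not the checker the project will actually deploy:

```python
import re

def password_is_acceptable(pw: str) -> bool:
    """Illustrative check of the password rules listed above."""
    if not 8 <= len(pw) <= 12:                       # required length
        return False
    required = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    if not all(re.search(pattern, pw) for pattern in required):
        return False                                 # missing a character class
    for i in range(len(pw) - 2):                     # reject runs like "abc", "123"
        a, b, c = (ord(ch) for ch in pw[i:i + 3])
        if b - a == 1 and c - b == 1:
            return False
    return True

print(password_is_acceptable("Str0ng!pw"))  # True
print(password_is_acceptable("abc12345"))   # False (no upper case or symbol)
```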
As already mentioned, Confluence software will be used as the collaborative
tool for exchanging data across project partners. Some data may also be
exchanged through e-mail. In order to have a properly protected network, a
strong firewall will be installed on the personal computers of all individual
members of the Project Consortium. This will protect the OPTICS2 network by
controlling internet traffic flowing in and out. In line with this, antivirus
and anti-malware software will be recommended as compulsory tools on any
computer used by the OPTICS2 Consortium. All the aforementioned security
applications will be regularly updated.
Raw data will be stored until the end of the project and then deleted from
both servers and backup disks.
Project documentation, including technical documents describing the results of
research activities, will be archived after the end of the project in
password-protected servers.
# 6 ETHICAL ASPECTS
Ethical aspects of data protection and management are addressed in
Deliverables 7.1 and 7.2.
According to the Italian Regulation and following the checklist available on
the Italian Personal Data Protection Authority website (accessible via this
link:
_https://web.garanteprivacy.it/rgt/NotificaInserimento.php?h_act=U&x=5.535781156837938_),
OPTICS2 does not require any ethical approval.
Indeed, OPTICS2 will only collect basic personal data for the organisation of
the project's events, aiming at collecting experts' opinions to complement the
State-of-the-Art results. Such data fall into the category of personal,
non-sensitive, non-judicial information.
# REFERENCES
European Commission – Directorate-General for Research & Innovation (2016).
_Guidelines on FAIR Data Management in Horizon 2020_. Version 3.0. URL:
http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf

European Commission – Directorate-General for Research & Innovation (2017).
_Guidelines to the Rules on Open Access to Scientific Publications and Open
Access to Research Data in Horizon 2020_. Version 3.0 (21 March 2017). URL:
http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-pilot-guide_en.pdf

European Parliament and Council (2016). _Regulation EU 2016/679: on the
protection of natural persons with regard to the processing of personal data
and on the free movement of such data, and repealing Directive 95/46/EC
(General Data Protection Regulation)_. URL:
http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679&from=EN

OPTICS2 (2018). _D7.1 H - Requirement No. 1 (Ethics)_. Submitted.

OPTICS2 (2018). _D7.2 POPD - Requirement No. 2 (Ethics)_. Submitted.
# Introduction
This document deals with the research data produced, collected and preserved
during the project. This data can either be made publicly available or not
according to the Grant Agreement and to the need of the partners to preserve
the intellectual property rights and related benefits derived from project
results and activities.
Essentially, the present document will answer the following main questions:
* What types of data will the project generate/collect?
* What data is to be shared for the benefit of the scientific community?
* What data cannot be made available? Why?
* What format will the shared data have?
* How will this data be exploited and/or shared/made accessible for verification and re-use?
* How will this data be curated and preserved?
The data that can be shared will be made available as Open access research
data; this refers to the right to access and re-use digital research data
under the terms and conditions set out in the Grant Agreement. Openly
accessible research data can typically be accessed, mined, exploited,
reproduced and disseminated free of charge for the user.
The SWINOSTICS project abides by the European Commission's vision that
information already paid for by the public purse should not be paid for again
each time it is accessed or used, and that it should benefit European
companies and citizens to the full. This means making publicly-funded
scientific information available online, at no extra cost, to European
researchers, innovative industries and citizens, while ensuring long-term
preservation.
The Data Management Plan (DMP) is not a fixed document but evolves during the
lifespan of the project. The following are basic issues that will be dealt
with for the data that can be shared:
* **Data set reference and name**
The identifier for the datasets to be produced will have the following format:
SWINOSTICS_[taskx.y]_[descriptive name]_[progressive version number]_[date of
production of the data] (an illustrative sketch of this convention is given
after this list).
* **Data set description**
Description of the data that will be generated or collected will include:
* Its origin, nature, scale and to whom it could be useful, and whether it underpins a scientific publication. Information on the existence (or not) of similar data and the possibilities for integration and reuse.
* Its format
* Tools needed to use the data (for example specialised software)
* Accessory information such as possible video registration of the experiment or other.
* **Standards and metadata**
Reference to existing suitable standards of the discipline. If these do not
exist, an outline on how and what metadata will be created, if necessary.
* **Data sharing**
Description of how data will be shared, including access procedures, embargo
periods (if any), outlines of technical mechanisms for dissemination and
necessary software and other tools for enabling re-use, and definition of
whether access will be widely open or restricted to specific groups.
Identification of the repository where data will be stored, if already
existing and identified, indicating in particular the type of repository
(institutional, standard repository for the discipline, etc.). In case the
dataset cannot be shared, the reasons for this should be mentioned (e.g.
ethical, rules of personal data, intellectual property, commercial, privacy-
related, security-related).
* **Archiving and preservation (including storage and backup) and access modality**
Description of the procedures that will be put in place for long-term
preservation of the data. Indication of how long the data should be preserved,
what is its approximated end volume, what the associated costs are and how
these are planned to be covered.
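As an illustration of the dataset naming convention described in the first bullet above, a minimal Python sketch follows; the helper name, the date format and the example values are assumptions, not part of the DMP:

```python
from datetime import date

def dataset_id(task: str, name: str, version: int, produced: date) -> str:
    """Assemble SWINOSTICS_[task]_[name]_[version]_[date of production]."""
    return f"SWINOSTICS_{task}_{name}_v{version}_{produced.isoformat()}"

print(dataset_id("task3.1", "MRE-characterisation", 1, date(2019, 5, 20)))
# SWINOSTICS_task3.1_MRE-characterisation_v1_2019-05-20
```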
# Types of data generated within the project
The project is expected to generate the following types of data:
1. Data from the preliminary experiments performed to evaluate the Molecular Recognition Elements (MREs) and the PIC surface functionalization (WP2)
2. Data regarding the MREs characterization (WP3-T3.1)
3. Data regarding the PIC surface characterization (WP3-T3.4)
4. Data regarding the PIC sensor testing (WP3-T3.5)
5. Mobile application (WP4-T4.1)
6. Cloud-based software (WP4-T4.2)
7. Design and testing data regarding the operation of the sample delivery and microfluidics subsystem, as well as regarding the bio-sensing surface regeneration module (WP4-T4.3-T4.7)
8. Design and testing data regarding the main processing unit and firmware (WP4-T4.4-T4.8)
9. Design and testing data regarding the optical subsystem (WP4-T4.5)
10. Design and testing data regarding the temperature conditioning subsystem (WP4-T4.6)
11. Mid-project testing results (WP5-T5.2)
12. Lab validation results using reference samples (WP5-T5.4)
13. Lab validation results using field samples (WP5-T5.6)
14. Device performance assessment data after the field tests (WP6-T6.6)
15. Impact assessment data and system usability analysis (WP6-T6.7)
16. Scientific and technical papers, posters, videos, other publications (WP7-T7.1)
17. Business plan (WP7-T7.2)
## Data to be shared and not
Some of the above data will be shared with the scientific community, following
the principles of open data sharing. In particular:
* Data from category **(a)** might be shared with the scientific community, following research publications by CNR, UPV, AUA and LUME. In case of accepted publications, selected datasets will be shared with the community for verification purposes.
* Data from categories **(b)** and **(c)** might be shared with the scientific community, following research publications by CNR. In case of accepted publications, selected datasets will be shared with the community for verification purposes.
* Data from category **(d)** will not be shared under any circumstances, since those refer to prototypes and possibly commercial products of LUME.
* Data from categories **(e)** and **(f)** will be shared with the community. We will not share technical details on the implementation of the App and the Cloud platform, but the software applications will be available to the public for some basic functions and information about the SWINOSTICS device.
* Data from categories **(g), (h), (i) and (j)** will not be shared under any circumstances, since those refer to prototypes and possibly commercial products of CyRIC, K46 and ISS.
* Data from category **(k)** will not be shared, since those will be intermediate data, used to improve the system.
* Data from categories **(l)** , **(m), (n)** and **(o)** might be shared with the scientific community, following research publications by all partners. In case of accepted publications, selected datasets will be shared with the community for verification purposes. Particular emphasis will be paid to IPR protection, since possibly sensitive data might be available particularly in the results of those tests.
* Data from category **(p)** will be shared with the community.
* Data from category **(q)** will not be shared, since it is confidential information.
# Management plan for the different sharable categories of data
In the sections that follow, details on the data to be shared with the
community are presented. Focus is put on:

a) Dataset description

b) Standards and metadata, where applicable
## Data from preliminary experiments
The data related to the preliminary experiments performed to evaluate the
Molecular Recognition Elements (MREs) and the PIC surface functionalization
are mainly an output of WP2. Those experiments were not originally foreseen,
but the consortium decided that performing them would be very helpful in
preparing the specifications. The following types of data will be produced:
1\. Experimental data and figures, used also in the deliverables (D2.2 and
D2.3)
_Information about tools and instruments:_
Documentation-related documents will be available either in Microsoft Word
format or as PDF files. Measured data will be available in the form of Bruker
OPUS files, Excel tables or Origin files derived from the Origin software.
_Standards and metadata:_
To guarantee a high academic standard, results will be documented considering
the typical nomenclature of the particular field of research.
Measured data are either self-explanatory or will be prepared or described by
an additional document. Metadata will be made available for each document or
data set.
_Accessibility:_
Once a dataset is made available on the project repository, it will freely be
available to the community, in the way described in the appropriate section
below. No embargo period is foreseen.
## Data from MREs characterisation
Data from the testing and characterization of the molecular recognition
elements (MREs) used for the PIC functionalisation will be mainly generated as
an output of WP3, in particular as an output of task T3.1. Such data will
include the description of the experimental procedure, as well as measurements
needed for the characterization of the MREs selected, such as ELISA, surface
plasmon resonance (SPR) and fluorescence steady state measurements.
The following types of data will be produced:
1\. Possible data to use in scientific and technical publications
_Information about tools and instruments:_
Documentation-related documents will be available either in Microsoft Word
format or as PDF files. Measured data will be available as Excel tables or
Origin files derived from the Origin software.
_Standards and metadata:_
To guarantee a high academic standard, results will be documented considering
the typical nomenclature of the particular field of research.
Measured data are either self-explanatory or will be prepared or described by
an additional document. Metadata will be made available for each document or
data set.
_Accessibility:_
Once a dataset is made available on the project repository, it will freely be
available to the community, in the way described in the appropriate section
below. No embargo period is foreseen.
## Data regarding the PIC surface characterisation
Data from the characterization of the functionalized PIC will be mainly
generated as an output of WP3, in particular as an output of task T3.4. Such
data will include the description of the experimental procedure, as well as
measurements needed for the characterization of the MREs selected, such as
ELISA, surface plasmon resonance (SPR) and fluorescence steady state
measurements.
The following types of data will be produced:
1\. Possible data to use in scientific and technical publications
_Information about tools and instruments:_
Documentation-related documents will be available either in Microsoft Word
format or as PDF files. Measured data will be available as Excel tables or
Origin files derived from the Origin software.
_Standards and metadata:_
To guarantee a high academic standard, results will be documented considering
the typical nomenclature of the particular field of research.
Measured data are either self-explanatory or will be prepared or described by
an additional document. Metadata will be made available for each document or
data set.
_Accessibility:_
Once a dataset is made available on the project repository, it will freely be
available to the community, in the way described in the appropriate section
below. No embargo period is foreseen.
## Data regarding the mobile application development
No research data will be produced through this activity. Furthermore, we will
not share technical details on the implementation of the mobile application,
but the actual App will be available to the public, at least with some basic
functionalities.
## Data regarding the cloud software development
No research data will be produced through this activity. Furthermore, we will
not share technical details on the implementation of the application, but the
actual cloud software will be available to the public (at least for viewing
basic information and understanding the scope of the platform).
## Data regarding the lab validation using reference samples
The data in this category is collected during lab tests of the SWINOSTICS
device. Data collected is related to the developments of T5.4. The task is
planned to run from M26 to M28. Data that may be suitable for publicly sharing
could be related to the overall device testing and its outputs, as well as the
possible testing of individual components performance. The data will be
available after ensuring the protection of possible IP issues and following
the acceptance of related scientific or technical publications.
Data will be shared in the form of .zip packets and may include: spreadsheets
with experimental results, photos from the setup, documents describing the
experiment and the samples (reference samples) used.
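A minimal sketch of how such a .zip packet could be assembled (the file names are hypothetical placeholders, and this is not project tooling):

```python
import zipfile

def build_packet(archive_path: str, files: list[str]) -> None:
    """Bundle experiment files into one shareable .zip packet."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in files:
            zf.write(path)

# Hypothetical contents of one experiment packet:
build_packet(
    "lab_validation_experiment01.zip",
    ["results.xlsx", "setup_photo.jpg", "experiment_description.pdf"],
)
```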
At the moment, it is not possible to estimate the file size, but it is not
expected to be higher than 100MB per experiment, mostly due to the possible
presence of images.
The possible use or re-use by the scientific community is provided by adhering
to terms and conditions set out in the Grant Agreement.
_Information about tools and instruments_ : It is not foreseen at the moment
the need to use specialised tools for using the research data, apart from
common spreadsheet and text file editors. If any specialised instrument is
required for a particular dataset, then information about this tool will be
published along with the dataset. If the tool is freely available, a link for
downloading it will also be provided.
Any scientific papers to be published that will make use of the datasets, will
also be associated to the datasets and be accessible through the same page.
_Standards and metadata_ : We will also create metadata for easier data
discovery, including through search engines. Information on the reference
samples will also be published along with the dataset.
_Accessibility_ : Once a dataset is made available on the project repository,
it will freely be available to the community, in the way described in the
appropriate section below. No embargo period is foreseen.
## Data regarding the lab validation using field samples
The data in this category is collected during lab tests of the SWINOSTICS
device. Data collected is related to the developments of T5.6. The task is
planned to run from M31 to M34. Data that may be suitable for publicly sharing
could be related to the overall device testing and its outputs, as well as the
possible testing of individual components performance. The data will be
available after ensuring the protection of possible IP issues and following
the acceptance of related scientific or technical publications.
Data will be shared in the form of .zip packets and may include: spreadsheets
with experimental results, photos from the setup, documents describing the
experiment and the samples used.
At the moment, it is not possible to estimate the file size, but it is not
expected to be higher than 100MB per experiment, mostly due to the possible
presence of images.
The possible use or re-use by the scientific community is provided by adhering
to terms and conditions set out in the Grant Agreement.
_Information about tools and instruments_ : It is not foreseen at the moment
the need to use specialised tools for using the research data, apart from
common spreadsheet and text file editors. If any specialised instrument is
required for a particular dataset, then information about this tool will be
published along with the dataset. If the tool is freely available, a link for
downloading it will also be provided.
Any scientific papers to be published that will make use of the datasets, will
also be associated to the datasets and be accessible through the same page.
_Standards and metadata_ : We will also create metadata for easier data
discovery, including through search engines. Information on the samples after
their analysis with the golden standard techniques will also be published
along with the dataset.
_Accessibility_ : Once a dataset is made available on the project repository,
it will freely be available to the community, in the way described in the
appropriate section below. No embargo period is foreseen.
## Device performance assessment data
Data from this category might be shared with the scientific community,
following research publications by all partners. In case of accepted
publications, selected datasets will be shared with the community for
verification purposes. Particular emphasis will be paid to IPR protection,
since possibly sensitive data might be available particularly in the results
of those tests.
The data in this category is generated and collected during and after the
field validation of the SWINOSTICS device. Data collected is related to the
developments of T6.6. The task is planned to run from M36 to M42. Data that
may be suitable for publicly sharing could be related to the overall device
testing and its outputs, as well as the possible testing of individual
components performance. The data will be available after ensuring the
protection of possible IP issues and following the acceptance of related
scientific or technical publications.
Data will be shared in the form of .zip packets and may include: spreadsheets
with experimental results, photos from the setup, documents describing the
validations, information about the test site and the comparison measurements
with golden standard procedures.
At the moment, it is not possible to estimate the file size, but it is not
expected to be higher than 100MB per experiment, mostly due to the possible
presence of images.
The possible use or re-use by the scientific community is provided by adhering
to terms and conditions set out in the Grant Agreement.
_Information about tools and instruments_ : It is not foreseen at the moment
the need to use specialised tools for using the research data, apart from
common spreadsheet and text file editors. If any specialised instrument is
required for a particular dataset, then information about this tool will be
published along with the dataset. If the tool is freely available, a link for
downloading it will also be provided.
Any scientific papers to be published that will make use of the datasets, will
also be associated to the datasets and be accessible through the same page.
_Standards and metadata_ : We will also create metadata for easier data
discovery, including through search engines. During system field testing,
comparison with golden standard methodologies is foreseen, in order to
generate ground truth data. Information about the golden standard methodology
used for comparison will also be published.
_Accessibility_ : Once a dataset is made available on the project repository,
it will freely be available to the community, in the way described in the
appropriate section below. No embargo period is foreseen.
## Data related to impact assessment and system usability analysis
The impact of the SWINOSTICS pilot campaigns in all locations will be
evaluated after the demos. Initial situation information will first be
collected, in order to be able to evaluate the effect of the device use.
Assessment will be carried out based on the KPIs mentioned in the DoA. Impact
assessment takes into consideration the RoI for the end-users. System
usability will also be evaluated with the help of the end-users.
The data in this category is generated and collected during and after the
field validation of the device. Data collected is related to the developments
of T6.7. The task is planned to run from M38 to M42. Data that may be suitable
for publicly sharing could be related to the impact assessment of the device
use or to the usability analysis. The data will be available after ensuring
the protection of possible IP issues and following the acceptance of related
scientific or technical publications.
Data will be shared in the form of .zip packets and may include: photos from
the demos, documents describing the initial situation, possible financial
considerations, information about the test site, system usability
questionnaires analysis.
At the moment, it is not possible to estimate the file size, but it is not
expected to be higher than 100MB per experiment, mostly due to the possible
presence of images.
The possible use or re-use by the scientific community is provided by adhering
to terms and conditions set out in the Grant Agreement.
_Information about tools and instruments_ : It is not foreseen the need to use
specialised tools for using the research data, apart from common spreadsheet
and text file editors.
Any scientific papers to be published that will make use of the datasets, will
also be associated to the datasets and be accessible through the same page.
_Standards and metadata_ : We will also create metadata for easier data
discovery, including through search engines. If data related to financial
projections will be published, information about the methods used for the
projections will also be published.
_Accessibility_ : Once a dataset is made available on the project repository,
it will freely be available to the community, in the way described in the
appropriate section below. No embargo period is foreseen.
## Scientific and technical papers, posters, videos and other publications
Data from this category are meant for sharing with the research community or
even the general public. All scientific publications will be made following
the green or golden "open access" model. Non-scientific publications will be
shared freely with every interested party.
Data collected is related to the developments of T7.1, which runs throughout
the project. Information on the planned publications is included in the
Dissemination Plan (D7.1), which is regularly updated.
Publications will be shared in the form of pdf, video or photo files. At the
moment, it is not possible to estimate the file size.
The possible use or re-use by the scientific community is provided by adhering
to terms and conditions set out in the Grant Agreement, as well as to the
terms of the publisher.
_Information about tools and instruments_ : It is not foreseen the need to use
specialised tools. Any datasets used in scientific papers will also be
accessible through the same location.
_Accessibility_ : Once a publication is made available on the project
repository, it will freely be available to the community, in the way described
in the appropriate section below. No embargo period is foreseen.
# Data sharing
All sharable data will be published and hosted as per individual availability
on the project’s public website i.e. _www.swinostics.eu_ . Partners generating
the data are also encouraged to publish the sharable data on other online
repositories (for example, Zenodo.org).
The SWINOSTICS website has friendly, easy-to-use navigation. It will be
modified in due time to accommodate additional sections (pages) where the
publishable data will be stored. The consortium will make sure that available
data will be easily retrievable by any interested party.
The data will be made available on the website through adaptive webpages. The
pages will cover the topics and project descriptive information to an
appropriate level for each set of information or dataset.
The data will be formatted as per the description of each section, provided
previously in this document, and will be presented for access along with the
necessary links to download the appropriate software tools, if necessary.
The pages will be available to the public domain, enriched with the necessary
metadata and will be open to web crawlers for search engine listing, so they
will be available to the public through standard web searches.
Although the pages themselves will be publicly available, the downloadable
data will be presented along with the possible restrictions provided in each
previously described section. This means that the following will apply on the
website in order to gain access to the information:
1. Terms and Conditions will apply and will have to be accepted prior to any download
2. Registration will be compulsory (free of charge) to maintain a data access record
3. For certain and limited number of datasets, a form will be available to request access to the data, but this will be subject to approval from the consortium
Downloadable formats will be:
1. PNG, BMP and JPEG file formats
2. ZIP and other public domain compressed archives
3. PDF formatted documents
4. WMV, MP4 or AVI formats for possible videos
All available datasets will be downloadable in their entirety.
# Archiving, preservation and access modality
The data in all the various formats will be stored in standard storage. It has
been mentioned in the previous section that no specialised storage or access
is necessary, as all datasets will be downloadable in their entirety.
All the information, including the descriptive metadata, when available, will
be available throughout the lifetime of the website, which is expected to be
in the public domain for a period of at least five (5) years after the
completion of the project. Due to the possibly large size of individual
downloadable files, the storage used will be based on Cloud-based services,
and costs are estimated (based on current prices) according to the table
below:
_Geographically redundant storage (for disaster/recovery purposes):_

| **Pricing tier** | **Monthly price per GB** | **Estimated storage requirements** | **Estimated price per month** | **Estimated price over 5 years** |
| --- | --- | --- | --- | --- |
| First 1 TB/per month | €0.0358 per GB | 1 TB (1024 GB) | €36.66 per month | €2,199.60 |
| Next 49 TB/per month (1-50 TB) | €0.0352 per GB | 1 TB (1024 GB) | €36.05 per month | €2,162.69 |
| **Total** | | | | **€4,362.29** |
Please note that we estimate a maximum of no more than 2 TB of data to be made
publicly available. The cost of storage will be covered by the consortium
members.
Please note that the prices are based on current Microsoft Azure pricing
(http://azure.microsoft.com/en-us/pricing/details/storage/).
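The figures in the table can be reproduced with a few lines of Python; cent-level differences from the table stem from when intermediate values are rounded:

```python
GB_PER_TB = 1024
MONTHS = 5 * 12  # five-year horizon

for label, eur_per_gb in [("first TB", 0.0358), ("next TB (1-50 TB band)", 0.0352)]:
    monthly = eur_per_gb * GB_PER_TB
    print(f"{label}: EUR {monthly:.2f}/month -> EUR {monthly * MONTHS:.2f} over 5 years")
# first TB: EUR 36.66/month -> EUR 2199.55 over 5 years
# next TB (1-50 TB band): EUR 36.04/month -> EUR 2162.69 over 5 years
# table total: EUR 2,199.60 + EUR 2,162.69 = EUR 4,362.29
```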
Please note that some data may be stored under the facilities of the
consortium members that own them but will still be referenced via the website.
Data may also be shared with the community through public repositories.
# Conclusions
The Data Management Plan presented here describes the research data that will
be produced by each of the data-generating tasks of the project and the way
those data will be made available. Information regarding data sharing,
archiving and preservation is also included.
Finally, a preliminary costing analysis has been made. It has to be clarified
that this Data Management Plan (DMP) is not a fixed document but evolves
during the lifespan of the project. The final version will be delivered in M42.
# Executive summary
The following document describes the data management cycle for the data sets
that will be developed, collected, generated, and processed by the NextFOOD
project. It provides an overview of the data that will be generated and
processed during the project and/or after its completion. This document is the
first version of a series and is therefore a starting point and reference
document for the researchers taking part in the NextFOOD project. As the
project progresses, more detailed and elaborate versions of this document will
be developed to incorporate the developments of the project as far as data
generation, development, processing, and storage are concerned.
# Project abstract
NEXTFOOD will drive the crucial transition to more sustainable and competitive
agrifood and forestry systems development by designing and implementing
education and training systems to prepare budding or already practicing
professionals with competencies to push the green shift in our rapidly
changing society. NEXTFOOD will challenge the linear view of knowledge
transfer as a topdown process from research to advice and practice, and
support the transition to more learner-centric, participatory, action-based
and action-oriented education and learning in agrifood and forestry systems.
In several pioneering case studies, covering agrifood and forestry systems in
Europe, Asia and Africa, farmers solve real challenges related to
sustainability together with researchers, students and other relevant
stakeholders while developing both green technical skills and soft
collaborative competencies. NEXTFOOD will assure quality in research and
education by creating a peer-review system for evaluation of practice-oriented
research outputs focusing on sustainability and practical usefulness. In
addition, we will develop an international accreditation framework for
education and training in fields relevant to sustainable agrifood and forestry
systems. An innovative action research process will guide the NEXTFOOD
project’s development in a cyclical manner, ensuring that the research process
and actual case studies are ever-improving. This will exemplify how practice-
oriented research can be instrumental to achieve: better collaboration between
university and society, more innovation in the agrifood and forestry systems
sector, and a progressive agrifood community ready to tackle complex
sustainability challenges of the 21st century.
# 1 Data summary
## Purpose of the data collection/generation
The NextFOOD project's aim is to generate an innovative European science and
education roadmap for sustainable agriculture along the value chain, from
research via fabrication into application.
In order to achieve this goal, the project team will engage in several actions
to ensure a meaningful result. These actions are organised in the following
8 work packages:
* WP1 Inventory of the skills needed for a transition to more sustainable agriculture, forestry and associated bio-value chains.
* WP2 Action research facilitation
* WP3 Future curriculum, education and training system
* WP4 Policy assessment and recommendations
* WP5 Quality assured knowledge transfer
* WP6 Communication, dissemination and exploitation
* WP7 Management
* WP8 Ethics requirements
For all the above work packages, and especially for work packages 1 to 5,
data will be collected and produced during the NextFOOD project for:
* research purposes
* development of new educational modules
* development of new educational material
* communication purposes within project partners and other interested bodies
## Relation to the objectives of the project
The collection of data is closely related to the objectives of the project
which are summarized as follows.
* Create an inventory of the skills and competencies needed for a transition to more sustainable agriculture, forestry and associated bio-value chains,
* Facilitate case studies to identify gaps and needs
* Test new relevant curricula and training methods
* Identify policy instruments that support the transition towards action-oriented, and practice-oriented learning methods
* Peer-review tools for evaluating the quality of the practice-oriented research
* Create a platform for knowledge sharing
An Annex will be provided by the case study leaders summarising the datasets
of the project and explaining in more detail the rationale for compiling each
dataset and its purpose within the scope of the project.
## Types and formats of data generated/collected
The NextFOOD project will employ a number of Quantitative and Qualitative
methods in order to realize its objectives.
For Qualitative methods, data created in audio and/or video will be
transcribed and anonymized. Any hardcopy data products, such as participatory
design outputs will be documented in a proper electronic medium (e.g.
photographed).
Notetaking, journals etc. should be produced in an electronic textual data
form. When this is not possible, originals should be documented in a proper
electronic medium (e.g. photographed) and an electronic textual data form
should also be produced.
To ensure the greatest possible data sharing, reuse and preservation, the
project will adhere to the UK Data Service guidance on recommended formats,
summarised in the following table.
_Table 1: Recommended file formats per type of data to be used in the NextFOOD
project_
| **Type of data** | **Recommended formats** | **Acceptable formats** |
| --- | --- | --- |
| Tabular data with extensive metadata (variable labels, code labels, and defined missing values) | SPSS portable format (.por); delimited text and command ('setup') file (SPSS, Stata, SAS, etc.); structured text or mark-up file of metadata information, e.g. DDI XML file | proprietary formats of statistical packages: SPSS (.sav), Stata (.dta), MS Access (.mdb/.accdb) |
| Tabular data with minimal metadata (column headings, variable names) | comma-separated values (.csv); tab-delimited file (.tab); delimited text with SQL data definition statements | delimited text (.txt) with characters not present in data used as delimiters; widely-used formats: MS Excel (.xls/.xlsx), MS Access (.mdb/.accdb), dBase (.dbf), OpenDocument Spreadsheet (.ods) |
| Geospatial data (vector and raster data) | ESRI Shapefile (.shp, .shx, .dbf, .prj, .sbx, .sbn optional); geo-referenced TIFF (.tif, .tfw); CAD data (.dwg); tabular GIS attribute data; Geography Markup Language (.gml) | ESRI Geodatabase format (.mdb); MapInfo Interchange Format (.mif) for vector data; Keyhole Mark-up Language (.kml); Adobe Illustrator (.ai), CAD data (.dxf or .svg); binary formats of GIS and CAD packages |
| Textual data | Rich Text Format (.rtf); plain text, ASCII (.txt); eXtensible Mark-up Language (.xml) text according to an appropriate Document Type Definition (DTD) or schema | Hypertext Mark-up Language (.html); widely-used formats: MS Word (.doc/.docx); some software-specific formats: NUD*IST, NVivo and ATLAS.ti |
| Image data | TIFF 6.0 uncompressed (.tif) | JPEG (.jpeg, .jpg, .jp2) if original created in this format; GIF (.gif); TIFF other versions (.tif, .tiff); RAW image format (.raw); Photoshop files (.psd); BMP (.bmp); PNG (.png); Adobe Portable Document Format (PDF/A, PDF) (.pdf) |
| Audio data | Free Lossless Audio Codec (FLAC) (.flac) | MPEG-1 Audio Layer 3 (.mp3) if original created in this format; Audio Interchange File Format (.aif); Waveform Audio Format (.wav) |
| Video data | MPEG-4 (.mp4); OGG video (.ogv, .ogg); motion JPEG 2000 (.mj2) | AVCHD video (.avchd) |
| Documentation and scripts | Rich Text Format (.rtf); PDF/UA, PDF/A or PDF (.pdf); XHTML or HTML (.xhtml, .htm); OpenDocument Text (.odt); plain text (.txt) | widely-used formats: MS Word (.doc/.docx), MS Excel (.xls/.xlsx); XML marked-up text (.xml) according to an appropriate DTD or schema, e.g. XHMTL 1.0 |
_Existing data to be re-used_
No re-use of existing data is envisaged for the project so far.
## Origin of the data
The data for the NextFOOD project will be generated by the project team
through qualitative and quantitative methods. In particular, the core of the
project is utilizing a participatory action research protocol, especially in
WP2 case studies, in order to guide the NextFOOD project’s development in a
cyclical manner, ensuring that the research process and actual case studies
are ever-improving.
In particular, during the participatory action research the following types of
data collection are expected to be utilized:
participation, observation, recordkeeping, notetaking, surveying and
profiling, semistructured and informal interviewing, analysis of key reports,
running focus groups, photographing and videoing, and journaling.
In addition to the above data generated by the NextFOOD project will include
quantitative surveys through questionnaires.
_Expected size of the data_
The expected size of the data is not known.
## Data utility
Data generated from the NextFOOD project is expected to be useful to the
following entities:
* Educational Institutions
* Agricultural Advisory Services
* Policy and decision makers
* Agrifood industry
* Farmers, Farmers organisations
* Forestry Associations
* Research community
* Evaluators
* Project officers and project administration offices
# 2 Findable, Accessible, Interoperable, Reusable Data for the NextFOOD
Project
## 2.1 Making data findable, including provisions for metadata
### Discoverability of data and metadata provision
There are various disciplinary metadata standards, the NextFOODProject will
follow OpenAIRE Guidelines concerning the availability of medata.
All datasets will include metadata defining the what, where, when, why, and
how data of the data. Where appropriate, the data creators will assign Digital
Object Identifiers (DOIs).
For all data uploaded in a database/repository, the metadata will be in JSON-
format according to a defined JSON schema. The repository will allow for the
metadata exported in several standard formats such as MARCXML, Dublin Core,
and DataCite Metadata Schema (according to the OpenAIRE Guidelines).
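As a hedged illustration of what such dataset-level metadata could look like when serialised to JSON, the sketch below uses field names loosely aligned with DataCite/Dublin Core elements; the actual JSON schema is defined by the project, and the values shown are placeholders:

```python
import json

metadata = {
    "identifier": "10.5281/zenodo.0000000",  # placeholder DOI
    "title": "NextFOOD case study survey data",
    "creators": ["<partner institution>"],
    "publicationYear": 2019,
    "subjects": ["sustainable agriculture", "education"],
    "funder": "European Commission, Horizon 2020",
    "description": "The what, where, when, why and how of the data.",
}
print(json.dumps(metadata, indent=2))
```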
Metadata will be provided at project and dataset level and will include
sufficient information to link it to the research publications/outputs and to
identify the funder and discipline of the research, and will include
appropriate keywords to help external and internal users locate the data.
Final responsibility for complying with EU regulations will lie with each
partner producing and providing data.
To the extent possible, surveys and coding will be shared between case
studies.
### Identifiability of data and standard identification mechanism
NextFOOD will use an internal project identifier for the data sets to be
produced. This will follow the format:
WPNumber_TaskNumber_PartnerName_DataSubset_DatasetName_Version_DateOfStorage,
where the project name is NextFOOD and the PartnerName represents the name of
the data custodian (WP Lead/Task Leader).
An example of this naming format would be:
WP2_T2.1_NMBU_Subset1_UserRequirements_V1.0_20.09.18
Data uploaded in repositories (e.g. Zenodo) will have their own Digital Object
Identifier issued by the Repository that will be utilized. Similarly, all
Journal Articles and the Monograph will have their own digital identifier
issued by the respective publisher.
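A hedged sketch of checking a file name against this internal identifier convention follows; the regular expression is an assumption inferred from the example above, not a project-defined schema:

```python
import re

# Hypothetical pattern inferred from WP2_T2.1_NMBU_Subset1_UserRequirements_V1.0_20.09.18
PATTERN = re.compile(
    r"^WP(?P<wp>\d+)_T(?P<task>\d+\.\d+)_(?P<partner>[A-Za-z0-9]+)_"
    r"(?P<subset>[A-Za-z0-9]+)_(?P<dataset>[A-Za-z0-9]+)_"
    r"V(?P<version>\d+\.\d+)_(?P<date>\d{2}\.\d{2}\.\d{2})$"
)

m = PATTERN.match("WP2_T2.1_NMBU_Subset1_UserRequirements_V1.0_20.09.18")
print(bool(m), m.group("dataset") if m else None)  # True UserRequirements
```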
### Naming conventions used
The following naming conventions should be used across all different case
studies and project partners and irrespective to whether documents, data sets,
etc. are finalized or not.
### Folders
Aside from the predefined folder names, new folders shall only be created
after approval by the project coordinator.
Folder names shall preferably be built from a number and a short descriptive
name, in order to have consistent sorting and so that a folder can be found in
the same place at any time.
### Files
File names should be developed so that, on the one hand, they provide
information about the content of the file and, on the other, they enable
chronological ordering and the creation of variants.
Repetitions in file and/or folder names shall be avoided where possible.
For maximum compatibility between Windows/Linux/Mac etc., the following characters **must be avoided** in both folder and file names: < > : " \ / | * ? .
The following characters **should be avoided** in both folder and file names: [ ]
= % $ + , ; A "-" at the beginning of a name is not allowed.
The length of the entire path shall not exceed 256 characters, including empty
spaces.
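A minimal sketch of enforcing these rules follows; the helper is hypothetical, and note that the DMP's forbidden-character list includes the full stop:

```python
FORBIDDEN = set('<>:"\\/|*?.')   # characters that must be avoided
DISCOURAGED = set("[]=%$+,;")    # characters that should be avoided

def name_problems(name: str) -> list[str]:
    """Report violations of the folder/file naming rules above."""
    problems = []
    if name.startswith("-"):
        problems.append('must not start with "-"')
    problems += [f"forbidden character {c!r}" for c in sorted(set(name) & FORBIDDEN)]
    problems += [f"discouraged character {c!r}" for c in sorted(set(name) & DISCOURAGED)]
    return problems

def path_too_long(path: str) -> bool:
    return len(path) > 256  # limit includes empty spaces

print(name_problems("02_WP2_case-studies"))  # []
print(name_problems("survey?.results"))      # forbidden '.' and '?'
```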
### Naming conventions for Survey Data and Coding schemes
Any surveys that will be conducted during the Project will include a detailed
explanation of the naming convention and a documentation file when needed.
Survey naming conventions should be common across cases to the extent that is
possible.
Qualitative data coding schemes should also be documented and follow clear
naming conventions documented to a separate file when needed. As in the case
of surveys coding schemes naming conventions should be common across cases, to
the extent that this is possible.
### Approach to search keywords
The NextFOOD project will assign keywords based on the UK Data Archive
_HASSET_ _Thesaurus_ and the _ELSST multilingual thesaurus_ _._
### Versioning
No more than 10 versions of a file should be kept. Versions reflecting
substantive changes should be kept: when a new version due to substantive
changes (a milestone version) is produced, minor versions of the previous
edition can be deleted.
Different versions should be identified by numbering v1.0, v1.1, etc., and any
changes made to a file when a new version is created should be recorded in a
different colour. Moreover, relationships between items (e.g. code and data
files) should be recorded.
Lastly, files saved in different locations should be synchronized, and master
versions should be uploaded in the Project Sharing drive.
Version numbering in file names can be through discrete or continuous
numbering depending on minor or major revisions.
Example:
_Table 2: Versioning rules to be used for the NextFOOD project_
<table>
<tr>
<th>
**File name**
</th>
<th>
**Changes to file**
</th> </tr>
<tr>
<td>
Interviewschedule_1.0
</td>
<td>
Original document
</td> </tr>
<tr>
<td>
Interviewschedule_1.1
</td>
<td>
Minor revisions made
</td> </tr>
<tr>
<td>
Interviewschedule_1.2
</td>
<td>
Further minor revisions
</td> </tr>
<tr>
<td>
Interviewschedule_2.0
</td>
<td>
Substantive changes
</td> </tr> </table>
### Standards for metadata creation
The NextFOOD project strongly encourages partners to utilize the DDI standard
in the development of their surveys. More information about the DDI can be
found _here_ .
## 1.2 Making data openly accessible
### Data that will be made openly available
The NextFOOD project will facilitate the sharing of results and deliverables,
both within and beyond the consortium. Results will be widely shared with the
interested communities, including but not limited to the scientific community,
policy and decision makers through publications in scientific journals and
presentations at conferences, as well as through open access data
repositories. Overall an open access policy will be applied, following the
rules outlined in the Grant and Consortium Agreements.
All data will be considered by default openly available, with the exception of
datasets that include personal data. In the latter case, data should be
anonymized before being considered for open availability.
All data will be made available for verification and re-use unless the task
leader can justify why data cannot be made openly accessible.
The Steering Committee will assess the reasoning of the justification and make
the final decision based on examination of the following elements regarding
confidentiality of datasets:
1. Commercial sensitivity of datasets
2. Data confidentiality for security reasons
3. Conflicts between open-access rules and national and European legislation (e.g. data protection regulations).
4. Sharing data would jeopardise the aims of the project
5. Other legitimate reasons, to be validated by the IPR Committee
### Availability guarantees
All NextFOOD project datasets will be made available in an online open
repository (e.g. Zenodo), including the relevant metadata to identify the
project, funder, scope, etc., unless the Steering Committee accepts that there
is sufficient reason and justification not to.
### Methods or software tools needed to access the data and documentation
about the software
NextFOOD will follow the UK Data Archive guidelines on recommended file
formats to ensure accessibility. The following documentation at study and data
level will also be provided for this reason.
### Study-Level
On a study-level the following documentation should be provided:
* research design and context of data collection: project history, aims, objectives, hypotheses, investigators and funders
* data collection methods, data collection protocols, sampling design, sample structure and representation, workflows, instruments used, hardware and software used, data scale and resolution, temporal coverage and geographic coverage, and digitisation or transcription methods used
* structure of data files, with the number of cases, records, files and variables, as well as any relationships among such items
* secondary data sources used and provenance, for example, for transcribed or derived data
* data validation, checking, proofing, cleaning and other quality assurance procedures carried out, such as checking for equipment and transcription errors, calibration procedures, data capture resolution and repetitions, or editing, proofing or quality control of materials
* modifications made to data over time since their original creation and identification of different versions of datasets
* for time series or longitudinal surveys, changes made to methodology, variable content, question text, variable labelling, measurements or sampling, and how panels were managed over time and between waves
* information on data confidentiality, access and any applicable conditions of use
* publications, presentations and other research outputs that explain or draw on the data
Important data documentation should include original questionnaires,
interviewer instructions, interview topic guides or experimental protocols.
### Data-Level
At data level, for survey and transcription data, partners should follow the
guidelines of the UK Data Service, available _here_ and _here_ respectively.
Depending on the file formats used, software such as SPSS, Stata or NVivo
might be needed to access the original data. Nevertheless, partners are
encouraged to provide DDI XML data and metadata.
The above should also be documented in specific "readme" files in the
respective folders.
### Where the data and associated metadata, documentation and code are deposited
The NextFOOD project will utilize Zenodo or a similar open repository for the
data that will be made publicly available.
Original non-anonymized data, or data that it is decided not to publish, will
be stored under the responsibility of the task leader in a closed repository
that adheres to all the necessary legal and ethical requirements.
### Access provision for restricted datasets
All partners will have access to data produced by the NextFOOD project
(internal), with the exception of non-anonymized data.
All published data will be openly accessible to all internal and external data
requests.
In the case of restricted access, the Steering Committee will assess external
data requests, or internal requests for access to non-anonymized data.
In addition to the above, task leaders should set an embargo period for the
data produced by the project, which cannot be longer than 24 months after the
completion of the project.
## 1.3 Making data interoperable
### Interoperability of Data
The datasets produced by the NextFOOD project will have high interoperability,
taking into account the type and discipline of the project.
The project will use the UK Data Service guidelines concerning data and
metadata vocabularies; partners are expected to follow the recommended file
formats of the UK Data Archive and are encouraged to follow the DDI standards
to ensure interoperability.
### Standard vocabularies
To the extent possible, partners are expected to use standard vocabularies for
all data types present in the datasets.
Where this is not possible, partners should provide adequate documentation to
allow interoperability.
## 1.4 Data re-use and licenses
### Data licensing and Permissions
All project data will be released under a Creative Commons Attribution
Non-Commercial license.
A task leader may opt out and propose a different license by providing the
necessary reasoning and justification to the Steering Committee.
### Data availability and disclosure
All data sets should be made public in an open repository before the
finalization of the project.
In the case of embargoed data, the embargo will not exceed a period of 24
months after the completion of the project.
### Data usability by third parties
Datasets will be open and usable by third parties, since they will be uploaded
under a Creative Commons Attribution Non-Commercial license.
In some cases, task leaders may opt out of publishing and publicly releasing
data under the above-mentioned license. In this case, sufficient justification
will be provided and the Data Management Plan will be amended accordingly.
Non-open data will also be reusable (except non-anonymized data), provided
that the data request is reasonable and justifiable.
### Data quality assurance processes
### **Survey data**
Surveys should be developed collectively, with the participation of all
relevant partners, who will ensure through internal peer review the quality of
the planning, data collection and documentation of the data.
A common survey and metadata registry at data level will be developed for
each survey, to the extent that a common survey can be applicable in all the
different case studies.
All surveys, common or not, should provide sufficient documentation at data
level (as described above).
### **Transcripts and participatory research data**
Similarly to the commonly developed protocols for survey research, specific
guidelines, coding schemes, etc. will be developed for transcripts and
participatory research data. All of the above should be common across all case
studies of the project to the extent possible.
For each dataset, sufficient documentation should be provided at data level
(as described above).
The Data Quality assurance processes are the responsibility of each Task
Leader.
### **Length of time for which the data will remain re-usable**
Published data in the open repository will remain re-usable for as long as the
repository specifies.
# 2 Allocation of resources
No specific costs are allocated for making the NextFOOD data FAIR. Each
partner will be responsible for the data management costs of the data that
they will produce.
## Responsibilities for data management in your project
**The project coordinator** will be responsible for keeping partners
accountable as far as data management is concerned.
**The Steering Committee** will be responsible for assessing whether data
should be deemed public or not, as well as under which license the data will
fall.
**Task leaders** will be responsible for ensuring that:
* sufficient documentation is produced for each dataset;
* the quality assurance processes are implemented;
* the datasets are uploaded to public repositories or, for non-public datasets, stored safely in the repository chosen by the organization.
**Case study leaders** will be responsible for data collection and
documentation, as well as for ensuring that legal and ethical rules are
followed as specified in the respective documents.
**In the case of external contractors** for data collection, data entry,
transcribing, processing or analysis, the organization doing the
subcontracting must ensure that all the legal and ethical rules are followed
as specified in the respective documents.
A specific folder structure for internal data storage will be developed by the
project coordinator with the support of the AFS.
## Costs and potential value of long-term preservation
There will not be additional costs for the project for a long-term
preservation of the data.
Data generated from the NextFOOD project is expected to be valuable to the
following entities:
* Educational Institutions
* Agricultural Advisory Services
* Policy and decision makers
* Agrifood industry
* Farmers, Farmers organisations
* Forestry Associations
* Research community
* Evaluators
* Project officers and project administration offices
# 3 Data security
## Data recovery, secure storage and transfer of sensitive data
Case study leaders and task leaders should keep a back-up of all data and
documentation that they produce.
They are also responsible for uploading and updating relevant data in the
internal repository of the project.
We recommend that storage involve at least two different forms, for example a
hard drive and a DVD, and that data integrity be checked periodically.
Personal information should be removed from data files and stored separately
under more stringent security measures. Any digital files or folders which
contain sensitive information and data should be encrypted.
Cloud data storage should be avoided for high-risk information, such as files
that contain personal or sensitive information or information that is
protected by law. Encryption should be used to safeguard data files to a
certain degree, but partners should keep in mind that in some cases it may
still not meet the requirements of data protection legislation.
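As one possible way to implement the encryption recommendation above, sensitive files could be encrypted before storage or transfer. The sketch below relies on the third-party `cryptography` package (an assumption, not a tool mandated by the project); secure key management remains the responsibility of each partner:

```python
from cryptography.fernet import Fernet  # third-party package: pip install cryptography

# Generate a key once and store it separately from the data,
# under the more stringent security measures described above.
key = Fernet.generate_key()

def encrypt_file(path: str, key: bytes) -> None:
    """Write an encrypted copy of the file alongside the original."""
    fernet = Fernet(key)
    with open(path, "rb") as fh:
        ciphertext = fernet.encrypt(fh.read())
    with open(path + ".enc", "wb") as fh:
        fh.write(ciphertext)

encrypt_file("interview_transcripts.csv", key)  # hypothetical file name
```

As noted above, encryption alone may still fall short of data protection requirements, so it complements rather than replaces removing personal information from data files.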
# 4 Ethical aspects
Ethical issues concerning the NextFOOD project are tackled in the respective
deliverables.
Additional information concerning ethical aspects such as compliance etc. can
be found _here_ .
1369_VALITEST_773139.md
# Introduction
The VALITEST project [ _https://www.valitest.eu/_ ] aims at improving the
diagnostic of plant pests by i) producing validation data on the performance
of the tests that are used in diagnostic,
ii) harmonising further processes and iii) enlarging/triggering enlargement of
the commercial offer for reliable detection and identification tests.
To achieve those objectives, a significant amount of data will be collected,
processed and generated. According to the European Commission (EC),
“_research data_ is information (particularly facts or numbers) collected to
be examined and considered, and to serve as a basis for reasoning, discussion,
or calculation”. In general terms, VALITEST data will follow the “_FAIR_”
principles, meaning “_Findable, Accessible, Interoperable and Re-usable_”.
The FAIR principles will ensure sound management of data,
leading to knowledge discovery and innovation, and to subsequent data and
knowledge integration and reuse. The data will be made findable and accessible
within the consortium, and to the broader research community, stakeholders and
policy makers. Also, data have to be compliant with national and European
ethic-legal frameworks, such as the General Data Protection Regulation (GDPR,
Regulation (EU) 2016/679), which has been applicable since May 2018. This Data
management plan (DMP) describes the data management life cycle for all data to
be collected, processed and/or generated by the project. It includes
information on the handling of research data both during and after the end of
the project, the nature of the data, the methodology and standards applied,
whether data will be shared or made open access, and how the data will be
curated and preserved.
The DMP is intended to be a living document, and can be further modified or
detailed during the project. The information can be made available on a finer
level of granularity through updates as the implementation of the project
progresses and when significant changes occur. Those changes might include new
data or changes in consortium policies. At the very least, the DMP will be
updated in the context of the periodic evaluation/assessment of the project,
but the implementation of the DMP at project level will also be part of the
annual reporting.
The VALITEST DMP is structured according to the H2020 templates. It includes 6
components:
* Data Summary
* FAIR data
* Allocation of resources
* Data security
* Ethical aspects
* Important topics requiring progress and/or update in future version of the DMP.
# Data Summary
1.1 What is the purpose of the data collection/generation and its relation to
the objectives of the project?
The VALITEST project [ _https://www.valitest.eu/_ ] aims at improving the
diagnostic of plant pests by i) producing validation data on the performance
of the tests that are used in diagnostic,
ii) harmonising further processes and iii) enlarging/triggering enlargement of
the commercial offer for reliable detection and identification tests.
The project will include two rounds of test performance studies (TPS) to
produce validation data, i.e. data concerning the performance of diagnostic
tests while used by several laboratories on a panel of samples prepared to be
as representative as possible of the potential samples.
To maximise the impact of the project, calls for interest will be organised to
include in the validation programme, kits from suppliers outside the
consortium and allow participation to the TPS of voluntary proficient
laboratories.
For a better understanding of the demands for current and future testing
options, identified stakeholders will be contacted to collect their views.
Current harmonised procedures in Plant Health for validation and organisation
of TPS will be improved by including appropriate statistical approaches and by
adapting the process for new promising technologies.
The management of the project will also require the collection of data
concerning the partners.
1.2 What types and formats of data will the project generate/collect?
Several kinds of data are foreseen to be collected or generated during the
project:
* Data concerning the partners;
* Validation data;
* Data concerning the stakeholders and their views on the diagnostic market;
* Data concerning the interest of kit suppliers outside of the consortium;
* Data concerning reference material; Reports and publications.
Data formats will be selected with a view to facilitating data storage and
transfer. Therefore, the data formats used will be machine-readable, but also
human-readable using common software. Additionally, the management team
recommends the use of non-proprietary formats, if possible.
1.3 Will you re-use any existing data and how?
The project will generate validation data but will also collect existing
validation data in order to enrich an existing data-base dedicated to
validation data. Existing data will not be processed during the project.
1.4 What is the origin of the data?
Data will be either generated during the project or collected in the context
of surveys and of calls for interest. Validation data generated during the
project will correspond to results obtained by several laboratories performing
the same experiments. Data will be collected and analysed by the laboratory in
charge of the test performance study.
1.5 What is the expected size of the data?
The actions planned during the VALITEST project should not require the storage
and handling of big data sets. The exact data size will be evaluated by the
Consortium partners during the course of the project.
1.6 To whom might it be useful ('data utility')?
According to the domain of expertise, data generated within the VALITEST
project can be useful to:
* Scientific community;
* Industries involved in plant health diagnostics;
* Inspection services, National plant protection services;
* Policy and decision makers, governmental authorities;
* International and regional organisations involved in plant health, such as EPPO, IPPC, APPPC, CAHFSA, etc.;
* Farmers/growers, landowners, agricultural advisors, breeders, etc.;
* Consumers and society.
It is the objective of the Consortium to provide most deliverables to the
widest public possible; however, restrictions on the use of data will apply,
especially for intellectual property or ethical reasons. When restrictions
apply, the rationale for such restrictions will be provided.
# FAIR data
Through the life cycle of the project, the FAIR principles will be followed as
far as possible, while paying attention to the non-disclosure of data
susceptible to compromise the quality trademark of SMEs and ensuring
compliance with national and European ethic-legal framework. The FAIR
component of the DMP still comprises points to clarify, which will be
addressed during the course of the project.
2.1 Making data findable, including provisions for metadata
Data discoverability can be obtained by different means, which include:
* Providing data visibility through a communication system (e.g. social media, website);
* Providing online links between research data and related publications or other related data;
* Providing open access (e.g. open data repository);
* Providing data documentation in a machine-readable format;
* Using metadata standards or metadata models;
* Providing access through applications;
* Providing online data visualisation/analysis tools for the data, to help researchers to explore data in order to determine its appropriateness for their purposes.
2.1.1 Discoverability of data
Are the data produced and/or used in the project discoverable with metadata,
identifiable and locatable by means of a standard identification mechanism
(e.g. persistent and unique identifiers such as Digital Object Identifiers)?
The number of databases to store data and documents is increasing with the
expansion of the open access/open data approach. This brings the intrinsic
problem that access to information is fragmented (different locations) which
ironically has the counter-effect of hindering the use and re-use of open data. In
order to ensure visibility and accessibility of data, the Digital Research
Object Portal (DROP) hosted within EPPO and maintained by Euphresco will be
used to reference the open data and documents produced by the VALITEST
consortium. The Portal constitutes a unique entry point to ease the retrieval
of information and direct users towards the various infrastructures where the
actual data (and documents) will be hosted. All the deliverables are also
listed on the VALITEST website ( _https://www.valitest.eu/index_ ). Once
available, links will be provided between the VALITEST website and the
appropriate open repositories where the deliverables or datasets will be
submitted. Some repositories, such as Zenodo, also provide social media links.
Scientific publications will be advertised using the website and social media;
each will be identifiable and locatable by means of a Digital Object
Identifier (DOI).
According to the EC, _metadata_ is a systematic method for describing such
resources and thereby improving access to them. In other words, it is data
about data. Metadata provides information that makes it possible to make sense
of data (e.g. documents, images, datasets), concepts (e.g. classification
schemes) and real-world entities (e.g. organisations, places). Different types
of metadata exist for different purposes, such as descriptive metadata (i.e.
describing a resource for purposes of discovery and identification),
structural metadata (i.e. providing data models and reference data) and
administrative metadata (i.e. providing information to help management of a
resource).
The Dublin Core Metadata Initiative provides best practices in metadata.
All the data collected or generated during the project will be described using
the Dublin Core interoperable metadata standards. Furthermore, as stated in
the grant agreement of the project, the metadata will always include all of
the following:
* the terms “European Union (EU)” and “Horizon 2020”;
* the name of the action, acronym and grant number;
* the publication date, and length of embargo period if applicable, and a persistent identifier.
Concerning validation data, specific metadata have been developed by the
European and Mediterranean Plant Protection Organisation Panel on Diagnostics
and Quality Assurance, these will systematically be used.
Finally, some criteria will be ascertained to ensure best practice in metadata
management:
* Availability: metadata need to be stored where they can be accessed and indexed so they can be found;
* Quality: metadata need to be of consistent quality, so users know that it can be trusted;
* Persistence: metadata need to be kept over time;
* Open License: metadata should be available under a public domain license to enable their reuse.
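To make these requirements concrete, a Dublin Core style record for a VALITEST output might be assembled as follows. This is a minimal sketch: only the mandatory elements listed above are grounded in the grant agreement, and all values shown are placeholders:

```python
import json

# Minimal Dublin Core style record; keys follow the dc/dcterms element names.
record = {
    "dc:title": "VALITEST test performance study dataset (placeholder)",
    "dc:creator": "VALITEST consortium",
    "dc:description": (
        "Produced within the VALITEST action, grant number 773139, "
        "funded by the European Union (EU) Horizon 2020 programme."
    ),
    "dc:date": "2019-01-01",            # publication date (placeholder)
    "dcterms:available": "2019-07-01",  # end of embargo period, if applicable
    "dc:identifier": "doi:10.xxxx/placeholder",  # persistent identifier
    "dc:subject": ["VALITEST", "plant health", "diagnostics"],
}

print(json.dumps(record, indent=2))
```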
2.1.2 Naming conventions and clear versioning
Each deliverable can be identified by a unique number:
D.work_package_number.deliverable_number. When applicable, versioning is
used. Most documents made public will have a unique persistent identifier such
as Digital Object Identifier. DOI are automatically provided by most
repositories. If needed, data managers will purchase DOI numbers to reference
some outputs of the project.
2.1.3 Will search keywords be provided that optimize possibilities for re-use?
To facilitate the queries by keywords, metadata of the digital objects
generated during the project must include the term “VALITEST”. Other keywords
will belong to the harmonised vocabulary used in the domain¹.
2.2 Making data openly accessible
According to the _H2020 online manual_ , open access refers to the practice
of providing online access to scientific information that is free of charge to
the end-user and reusable. In the context of research and innovation,
'scientific information' can mean: peer-reviewed scientific research articles
(published in scholarly journals), or research data (data underlying
publications, curated data and/or raw data). Open access to scientific
publications means free online access for any user. The costs of open access
publishing are eligible, as stated in the Grant Agreement. Open access to
research data refers to the right to access and reuse digital research data
under the terms and conditions set out in the Grant Agreement. Users should
normally be able to access, mine, exploit, reproduce and disseminate openly
accessible research data free of charge.

¹ ISPM 5 – Glossary of phytosanitary terms; produced by the Secretariat of the
International Plant Protection Convention, adopted 2018, published 2018;
https://www.ippc.int/static/media/files/publication/en/2018/06/ISPM_05_2018_En_Glossary_2018-05-20_PostCPM13_R9GJ0UK.pdf
2.2.1 Which data produced and/or used in the project will be made openly
available as the default? If certain datasets cannot be shared (or need to be
shared under restrictions), explain why, clearly separating legal and
contractual reasons from voluntary restrictions.
By default, the data and metadata of VALITEST will be made openly available.
However, three types of restrictions will apply:
* Open access is incompatible with rules on protecting personal data: protection of the personal right needs to be ascertained either by avoiding open access to sensitive and personal data, or by anonymising the data. Deliverable D9.1 (confidential) specifies the procedures implemented for personal data collection, storage, protection, retention and destruction.
* Open access is incompatible with the obligation to protect results that can reasonably be expected to be commercially or industrially exploited. The management board will identify project results with exploitation potential or commercial value or results that are costly and difficult to replicate. The management board will also assess the best options for exploiting these results (e.g. opt for disclosure and publication of results or protection through patent or other forms of IPR), consulting patent attorneys, IP specialist agents and officers at Knowledge Transfer Departments, if required. During every general assembly meeting, a special session on IP issues will be scheduled to enable open discussions and joint decisions on the best strategies for managing and exploiting the project results. Hence, protection of the interests of all the involved parties will be ensured. This will enable the exploitation strategy by all parties to be reviewed regularly and to make sure any relevant result is on route for exploitation (directly by the partners or indirectly by third parties) with appropriate terms and conditions for all project partners. Decisions relative to data management needed between meetings will be approved through electronic correspondence.
* Open access data may compromise the quality trademark of partners. Only the name of the best performing kit(s) will be made available. The performance characteristics of marketed tools providing unsatisfactory performance levels will be systematically anonymised before being made available.
2.2.2 How will the data be made accessible (e.g. by deposition in a
repository)?
Within the consortium, the deliverables are accessible on a restricted access
platform. For the public and other stakeholders, the deliverables are listed
on the VALITEST website. Some deliverables will be kept confidential, but most
will be made publicly available. Public deliverables will be downloadable from
the project website.
Validation data will be hosted by the European and Mediterranean Plant
Protection Organisation on the Section 'validation data for diagnostic tests'
of the ‘EPPO Database on Diagnostic Expertise’ ( _https://dc.eppo.int/_ ) .
This database already includes validation data for diagnostic tests for
regulated pests, generated by various laboratories in EPPO member countries.
The validation data are presented according to a common format developed by
the EPPO Panel on Diagnostics and Quality Assurance. Validation data can be
submitted by any laboratory registered in the EPPO database on diagnostic
expertise. At this point, this database does not comply with Open Data policy:
it is not referenced and the data can only be visualised and downloaded as a
PDF file. During the project, the database will be referenced and the data it
contains will be made findable, accessible, interoperable and reusable.
Other data-sets will be uploaded in machine-readable format on one single
OpenAire compliant repository (Zenodo or HAL, still to be determined by the
management board – see more information on these repositories below) from
which data can be found through a web browser and downloaded by a potential
interested user.
Regarding peer-reviewed publications, the VALITEST partners will provide at a
minimum ‘green’ open access: they will archive the publications on an online
OpenAIRE compliant repository and ensure open access within a maximum of six
months. ‘Gold’ open access is preferred; in this case, the article is
immediately provided in open access by the publisher.
2.2.3 What methods or software tools are needed to access the data?
Only standard software, e.g. web browsers, pdf-file readers, and text readers,
or open licence free software, e.g. ‘R’ will be needed.
2.2.4 Where will the data and associated metadata, documentation and code be
deposited? Preference should be given to certified repositories which support
open access where possible.
Validation data will be integrated to the ‘EPPO Database on Diagnostic
Expertise’ ( _https://dc.eppo.int/_ ) . During the project, the database
will be referenced and the data it contains will be made findable, accessible,
interoperable and reusable.
Concerning other data sets and ‘green’ open access scientific articles, two
options are considered at this stage: Zenodo ( _https://zenodo.org/_ ) and
HAL ( _https://hal.archives-ouvertes.fr/_ ) . Both are online OpenAire
compliant repositories. The management board will have to decide which
repository should be used.
2.2.5 Have you explored appropriate arrangements with the identified
repository?
Arrangements already exist between the partner ANSES coordinating the project
and the repository HAL ( _https://halanses.archives-ouvertes.fr/_ ) .
Appropriate arrangements will be considered when repository will have been
chosen by the management board.
2.2.6 If there are restrictions on use, how will access be provided?
As described in 2.2.1, three types of sensitive data (including personal data)
are identified. These, will be managed following the GDPR requirements. Access
to these sensitive data will be granted by the management board. The data
processing will have to correspond to one of the uses announced to the subject
during the collection of the data. Data will be transferred by electronic mail
using a format including metadata. Deliverable D9.1 (confidential) provides
more details on the procedures implemented for personal data management.
2.2.7 Is there a need for a data access committee?
Data issues will systematically be discussed in the general assembly meetings.
The access to sensitive data will be granted by the management board of the
project.
2.3 Making data interoperable
2.3.1 Are the data produced in the project interoperable, that is allowing
data exchange and re-use between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different origins)?
All the data collected or generated during the project and made public will be
available in machine-readable format using common or open licence free
software, described using appropriate metadata, except for the personal and
sensitive data as described in deliverable 9.1 (confidential).
2.3.2 What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?
For most of the data collected or generated during the project, used metadata
will follow the Dublin Core interoperable metadata standards. For the specific
validation data, used metadata will follow the standards developed by the
European and Mediterranean Plant Protection Organisation Panel on Diagnostics
and Quality Assurance.
2.3.3 Will you be using standard vocabularies for all data types present in
your data set, to allow inter-disciplinary interoperability?
To allow inter-disciplinary interoperability, all the documents and data
generated during the project will use the standard vocabulary developed by the
International Plant Protection Convention (IPPC) to provide a harmonised
internationally agreed vocabulary associated with phytosanitary measures².
2.4 Increase data re-use (through clarifying licences)
2.4.1 How will the data be licensed to permit the widest re-use possible?
For public data, the reuse of the data will be possible through the open
repositories where they will be stored.
2.4.2 When will the data be made available for re-use? If an embargo is sought
to give time to publish or seek patents, specify why and how long this will
apply, bearing in mind that research data should be made available as soon as
possible.
The specific decision on an embargo for research data will be taken by the
management board. Scientific research articles should have an open access at
the latest on publication if in an Open Access journal, or within 6 months of
publication. For research data, open access should by default be provided when
the associated research paper is available in open access.
2.4.3 Are the data produced and/or used in the project useable by third
parties, in particular after the end of the project? If the re-use of some
data is restricted, explain why.
Most of the data collected or generated during the project will be available
from open repositories, and therefore reusable by third parties, even after
the end of the project. For ethical and legal reasons, personal or sensitive
data will not be made public (deliverable 9.1 - confidential). Data concerning
intellectual property will be discussed between relevant partners, and
decision will be taken according to the European and national rules.
2.4.4 How long is it intended that the data remains re-usable?
Regarding data stored on an OpenAIRE compliant public repository, all files
stored within the repository shall be retained after the project to meet the
requirements of good scientific practice.
For data stored on other repositories, researchers, institutions, journals and
data repositories have a shared responsibility to ensure long-term data
preservation. Partners must commit to preserving their datasets, on their own
institutional servers, for at least five years after publication. If, during
that time, the repository to which the data were originally submitted
disappears or experiences data loss, the partners will be required to upload
the data to another repository and publish a correction or update to the
original persistent identifier, if required.
2.4.5 Are data quality assurance processes described?
For the VALITEST consortium, it is essential to provide good quality data.
This will be ensured through various methods. Firstly, partners have existing
data quality assurance processes, which are described in their quality manual.
Secondly, publications will be disseminated using peer-reviewed journals, and
similarly, research data will be deposited on repositories providing a
curation system appropriate to the data.
² ISPM 5 – Glossary of phytosanitary terms; produced by the Secretariat of the
International Plant Protection Convention, adopted 2018, published 2018;
https://www.ippc.int/static/media/files/publication/en/2018/06/ISPM_05_2018_En_Glossary_2018-05-20_PostCPM13_R9GJ0UK.pdf
# Allocation of resources
3.1 What are the costs for making data FAIR in your project?
Costs directly associated to FAIR data management have been included within
the description of the different tasks of the project.
3.2 How will these be covered?
Costs related to open access to research data are eligible as part of the
Horizon 2020 grant (if compliant with the Grant Agreement conditions).
3.3 Who will be responsible for data management in your project?
The management team (ANSES, [email protected]) will ensure best practices
and FAIR principles in the data management of the project. Each partner will
be responsible for managing the data it uses, processes or generates in the
project. In relation with their data protection officer, each partner has
appointed a data manager ensuring the respect of DMP principles and involved
in personal data protection.
3.4 Are the resources for long term preservation discussed (costs and
potential value, who decides and how what data will be kept and for how long)?
As an intergovernmental organisation responsible for cooperation in plant
health within the Euro-Mediterranean region, EPPO will ensure the long term
preservation and availability of the validation data on its own resources.
# Data security
4.1 What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?
Partners will store and process personal data according to their internal
procedures. The security of data will be ensured by means of appropriate
technical and organisational measures.
4.2 Is the data safely stored in certified repositories for long term
preservation and curation?
Validation data will be stored on EPPO servers located in a French datacenter.
The databases are located on servers independent of the web platform. Access
to the servers is only possible from a secure network (dark fiber from EPPO HQ
or VPN). Access to raw data is only possible from the accounts of EPPO IT
Officers, through an authentication mechanism. The servers are monitored and
supervised 24/7 by a service provider. The provider may under no circumstances
access the data without the agreement of EPPO (the servers belong to EPPO).
Other public data will be stored on one single certified repository (HAL or
Zenodo).
# Ethical aspects
Deliverables D9.1 (personal data management) and D9.2 (ethical standards and
guidelines in non EU countries) specifically deal with ethical aspects of the
project. These documents include the management of personal and sensitive data
and the management of data imported to, or exported from EU.
# Important topics requiring progress and/or update in future versions of the
DMP
* Decision on the repository used for long term storage of data made public
* Consider appropriate arrangements with the selected repository
* Arrange DOI assignment to all documents and data sets (even when not uploaded on a repository automatically providing the DOI)
1370_VALITEST_773139.md
# Introduction
The VALITEST project [ _https://www.valitest.eu/_ ] aims at improving the
diagnostics of plant pests by i) producing validation data on the performance
of the tests that are used in diagnostics,
ii) harmonising further processes and iii) enlarging/triggering enlargement of
the commercial offer for reliable detection and identification tests.
To achieve those objectives, a significant amount of data will be collected,
processed and generated. According to the European Commission (EC),
“_research data_ is information (particularly facts or numbers) collected to
be examined and considered, and to serve as a basis for reasoning, discussion,
or calculation”. In general terms, VALITEST data will follow the “_FAIR_”
principles, meaning “_Findable, Accessible, Interoperable and Re-usable_”.
The FAIR principles will ensure sound management of data, leading to
knowledge discovery and innovation, and to subsequent data and knowledge
integration and reuse. The data will be made findable and accessible within
the consortium, and to the broader research community, stakeholders and policy
makers. Also, data must be compliant with national and European ethic-legal
frameworks, such as the General Data Protection Regulation (GDPR, Regulation
(EU) 2016/679), which has been applicable since May 2018. This Data management
plan (DMP) describes the data management life cycle for all data to be
collected, processed and/or generated by the project. It includes information
on the handling of research data both during and after the end of the project,
the nature of the data, the methodology and standards applied, whether data
will be shared or made open access, and how the data will be curated and
preserved.
The DMP is intended to be a living document and can be further modified or
detailed during the project. The information can be made available on a finer
level of granularity through updates as the implementation of the project
progresses and when significant changes occur. These changes might include new
data or changes in consortium policies. At the very least, the DMP will be
updated in the context of the periodic evaluation/assessment of the project,
but the implementation of the DMP at project level will also be part of the
annual reporting.
The VALITEST DMP is structured according to the H2020 templates. It includes 6
components:
* Data Summary
* FAIR data
* Allocation of resources
* Data security
* Ethical aspects
* Important topics requiring progress and/or update in future version of the DMP.
# Data Summary
## What is the purpose of the data collection/generation and its relation to
the objectives of the project?
The VALITEST project [ _https://www.valitest.eu/_ ] aims at improving the
diagnostics of plant pests by i) producing validation data on the performance
of the tests that are used in diagnostics,
ii) harmonising further processes and iii) enlarging/triggering enlargement of
the commercial offer for reliable detection and identification tests.
The project will include two rounds of test performance studies (TPS) to
produce validation data, i.e. data concerning the performance of diagnostic
tests when used by several laboratories on a panel of representative samples.
To maximise the impact of the project, calls for interest will be organised to
include kits from suppliers outside the consortium in the validation programme
and to allow participation in the TPS from voluntary proficient laboratories.
For a better understanding of the demands for current and future testing
options, identified stakeholders will be contacted to collect their views.
Current harmonised procedures in Plant Health for validation and organisation
of TPS will be improved by including appropriate statistical approaches and by
adapting the process for new promising technologies.
The management of the project will also require the collection of data
concerning the partners.
## What types and formats of data will the project generate/collect?
Several kinds of data are foreseen to be collected or generated during the
project:
* Data concerning the partners;
* Data concerning the participants in the TPS from outside of the consortium;
* Validation data;
* Data concerning the stakeholders and their views on the diagnostic market;
* Data concerning the interest of kit suppliers outside of the consortium;
* Data concerning reference material;
* Reports and publications.
Data formats will be selected with a view to facilitating data storage and
transfer. Therefore, the data formats used will be machine-readable, but also
human-readable using common software. Additionally, the management team
recommends the use of non-proprietary formats, if possible.
## Will you re-use any existing data and how?
The project will generate validation data but will also collect existing
validation data in order to enrich an existing database dedicated to
validation data. Existing data may be processed during the project in order to
adapt these data to the improvements of the database that will be made in the
course of the project.
## What is the origin of the data?
Data will be either generated during the project or collected in the context
of surveys and of calls for interest. Validation data generated during the
project will correspond to results obtained by several laboratories performing
the same experiments. Data will be collected and analysed by the laboratory in
charge of the test performance study.
## What is the expected size of the data?
The actions planned during the VALITEST project should not require the storage
and handling of big data sets. The exact data size will be evaluated by the
Consortium partners during the course of the project.
## To whom might it be useful ('data utility')?
According to the domain of expertise, data generated within the VALITEST
project can be useful to:
* Scientific community;
* Industries involved in plant health diagnostics;
* Inspection services, National plant protection organisations;
* Policy and decision makers, governmental authorities;
* International and regional organisations involved in plant health, such as IPPC, APPPC, CAHFSA, EPPO, etc.;
* Farmers/growers, landowners, agricultural advisors, breeders, etc.;
* Consumers and society.
# FAIR data
Through the life cycle of the project, the FAIR principles will be followed as
far as possible, while paying attention to the non-disclosure of data
susceptible to compromise the quality trademark of SMEs and ensuring
compliance with national and European ethic-legal framework.
## Making data findable, including provisions for metadata
Data discoverability can be obtained by different means, which include:
* Providing data visibility through a communication system (e.g. social media, website);
* Providing online links between research data and related publications or other related data;
* Providing open access (e.g. open data repository);
* Providing data documentation in a machine-readable format;
* Using metadata standards or metadata models;
* Providing access through applications;
* Providing online data visualisation/analysis tools for the data, to help researchers to explore data in order to determine its appropriateness for their purposes.
2.1.1 Discoverability of data
Are the data produced and/or used in the project discoverable with metadata,
identifiable and locatable by means of a standard identification mechanism
(e.g. persistent and unique identifiers such as Digital Object Identifiers)?
The number of databases to store data and documents is increasing with the
expansion of the open access/open data approach. This means that access to
information is fragmented (different locations), which ironically has the
counter-effect of hindering the use and re-use of open data. In order to ensure
visibility and accessibility of data, the Digital Research Object Portal
(DROP) hosted within EPPO and maintained by Euphresco will be used to
reference the open data and documents produced by the VALITEST consortium. The
Portal constitutes a unique entry point to ease the retrieval of information;
it will direct users towards the various infrastructures where the actual data
(and documents) will be hosted. All the deliverables are also listed on the
VALITEST website ( _https://www.valitest.eu/index_ ). Links will be available
between the VALITEST website and the appropriate open repositories where
deliverables or datasets will be submitted. The chosen repository (Zenodo)
also provides social media links. Scientific publications will be advertised
using the website and social media, each will be identifiable and locatable by
means of a Digital Object Identifier (DOI).
According to the EC, _metadata_ provide a systematic method for describing
such resources and thereby improving access to them. In other words, it is
data about data. Metadata provides information that makes it possible to make
sense of data (e.g. documents, images, datasets), concepts (e.g.
classification schemes) and real-world entities (e.g. organisations, places).
Different types of metadata exist for different purposes, such as descriptive
metadata (i.e. describing a resource for purposes of discovery and
identification), structural metadata (i.e. providing data models and reference
data) and administrative metadata (i.e. providing information to help
management of a resource).
The Dublin Core Metadata Initiative provides best practices in metadata.
All the data collected or generated during the project will be described using
the Dublin Core interoperable metadata standards. Furthermore, as stated in
the grant agreement of the project, the metadata will always include all of
the following:
* the terms “European Union (EU)” and “Horizon 2020”;
* the name of the action, acronym and grant number;
* the publication date, and length of embargo period if applicable, and a persistent identifier.
Concerning validation data, specific metadata have been developed by the
European and Mediterranean Plant Protection Organisation Panel on Diagnostics
and Quality Assurance, these will systematically be used.
Finally, some criteria will be ascertained to ensure best practice in metadata
management:
* Availability: metadata need to be stored where they can be accessed and indexed so they can be found;
* Quality: metadata need to be of consistent quality, so users know that it can be trusted;
* Persistence: metadata need to be kept over time;
* Open License: metadata should be available under a public domain license to enable their reuse.
2.1.2 Naming conventions and clear versioning
Each deliverable can be identified by a unique number:
D.work_package_number.deliverable_number. Where applicable, versioning is used.
Most documents made public will have a unique persistent identifier such as
Digital Object Identifier. DOI are automatically provided by the chosen
repository (Zenodo).
2.1.3 Will search keywords be provided that optimize possibilities for re-use?
To facilitate the queries by keywords, metadata of the digital objects
generated during the projects must include the term “VALITEST”. Other keywords
will belong to the harmonised vocabulary used in the domain 1 .
## Making data openly accessible
According to the _H2020 online manual_ , open access refers to the practice of
providing online access to scientific information that is free of charge to
the end-user and reusable. In the context of research and innovation,
'scientific information' can mean: peer-reviewed scientific research articles
(published in scholarly journals), or research data (data underlying
publications, curated data and/or raw data). Open access to scientific
publications means free online access for any user. The costs of open access
publishing are eligible, as stated in the Grant Agreement. Open access to
research data refers to the right to access and reuse digital research data
under the terms and conditions set out in the Grant Agreement. Users should
normally be able to access, mine, exploit, reproduce and disseminate openly
accessible research data free of charge.
2.2.1 Which data produced and/or used in the project will be made openly
available as the default? If certain datasets cannot be shared (or need to be
shared under restrictions), explain why, clearly separating legal and
contractual reasons from voluntary restrictions.
By default, the data and metadata of VALITEST will be made openly available.
However, three types of restrictions will apply:
* Open access is incompatible with rules on protecting personal data: protection of the personal right needs to be ascertained either by avoiding open access to sensitive and personal data, or by anonymising the data. Deliverable D9.1 (confidential) specifies the procedures implemented for personal data collection, storage, protection, retention and destruction.
* Open access is incompatible with the obligation to protect results that can reasonably be expected to be commercially or industrially exploited. The management board will identify project results with exploitation potential or commercial value or results that are costly and difficult to replicate. The management board will also assess the best options for exploiting these results (e.g. opt for disclosure and publication of results or protection through patent or other forms of IPR), consulting patent attorneys, IP specialist agents and officers at Knowledge Transfer Departments, if required. During every general assembly meeting, a special session on IP issues will be scheduled to enable open discussions and joint decisions on the best strategies for managing and exploiting the project results. Hence, protection of the interests of all the involved parties will be ensured. This will enable the exploitation strategy by all parties to be reviewed regularly and to make sure any relevant result is on route for exploitation (directly by the partners or indirectly by third parties) with appropriate terms and conditions for all project partners. Decisions relative to data management needed between meetings will be approved through electronic correspondence.
* Open access data may compromise the quality trademark of partners. The question of whether or not the performance of the various tests evaluated will be disclosed will be discussed on a case-by-case basis by WP1 leaders and kit providers.
2.2.2 How will the data be made accessible (e.g. by deposition in a
repository)?
Within the consortium, the deliverables are accessible on a restricted access
platform. For the public and other stakeholders, the deliverables are listed
on the VALITEST website. The decision to make a deliverable freely available
will follow a two-step process: i) each WP leader seeks an agreement within
the WP; ii) the WP leader informs the SC of the WP's proposal to publish or
not publish the deliverable, and the SC takes the final decision. Deliverables made
freely available by the SC will be uploaded on the chosen repository (Zenodo),
a link will be provided from the VALITEST website and from the DROP portal.
Validation data will be hosted by the European and Mediterranean Plant
Protection Organisation on the Section 'validation data for diagnostic tests'
of the ‘EPPO Database on Diagnostic Expertise’ ( _https://dc.eppo.int/_ ).
This database already includes validation data for diagnostic tests for
regulated pests, generated by various laboratories in EPPO member countries.
The validation data are presented according to a common format developed by
the EPPO Panel on Diagnostics and Quality Assurance. Validation data can be
submitted by any laboratory registered in the EPPO database on diagnostic
expertise. At this point, this database does not comply with Open Data policy:
it is not referenced and the data can only be visualised and downloaded as a
PDF file. During the project, the database will be referenced and the data it
contains will be made findable, accessible, interoperable and reusable.
Other data-sets will be uploaded in machine-readable format on one single
OpenAire compliant repository (Zenodo) from which data can be found through a
web browser and downloaded by a potential interested user.
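Besides the web interface, deposits can also be made programmatically through the Zenodo REST API. The sketch below follows the publicly documented deposition workflow; the endpoint details, token handling and file name are assumptions here and should be verified against the current Zenodo documentation:

```python
import requests  # third-party package: pip install requests

ZENODO = "https://zenodo.org/api/deposit/depositions"
TOKEN = "YOUR_ZENODO_ACCESS_TOKEN"  # personal access token, kept out of version control

# 1. Create an empty draft deposition.
draft = requests.post(ZENODO, params={"access_token": TOKEN}, json={})
draft.raise_for_status()
deposition_id = draft.json()["id"]

# 2. Attach a data file to the draft.
with open("validation_data.csv", "rb") as fh:  # hypothetical file name
    upload = requests.post(
        f"{ZENODO}/{deposition_id}/files",
        params={"access_token": TOKEN},
        data={"name": "validation_data.csv"},
        files={"file": fh},
    )
upload.raise_for_status()

# Publishing (which mints the DOI) is a separate, deliberate step:
# POST {ZENODO}/{deposition_id}/actions/publish, once the metadata is complete.
```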
Regarding peer-reviewed publications, ‘gold’ open access is preferred; in this
case, the article is immediately provided in open access by the publisher. At
a minimum, the VALITEST partners will provide ‘green’ open access and archive
the publications on an online OpenAIRE compliant repository, ensuring open
access within a maximum of six months.
2.2.3 What methods or software tools are needed to access the data?
Only standard software, e.g. web browsers, pdf-file readers, and text readers,
or open licence free software, e.g. ‘R’ will be needed.
2.2.4 Where will the data and associated metadata, documentation and code be
deposited? Preference should be given to certified repositories which support
open access where possible.
Validation data will be integrated to the ‘EPPO Database on Diagnostic
Expertise’ ( _https://dc.eppo.int/_ ). During the project, the database will
be referenced and the data it contains will be made findable, accessible,
interoperable and reusable.
Concerning other data sets and ‘green’ open access scientific articles, the
management board decided to rely on the online OpenAire compliant repository
Zenodo ( _https://zenodo.org/_ ).
2.2.5 Have you explored appropriate arrangements with the identified
repository?
Appropriate arrangements will be considered when the first data sets,
deliverables or publications are made available.
2.2.6 If there are restrictions on use, how will access be provided?
As described in 2.2.1, three types of sensitive data (including personal data)
are identified. These will be managed following the GDPR requirements. Access
to these sensitive data will be granted by the management board. The data
processing will have to correspond to one of the uses announced to the subject
during the collection of the data. Data will be transferred by electronic mail
using a format including metadata. Deliverable D9.1 (confidential) provides
more details on the procedures implemented for personal data management.
2.2.7 Is there a need for a data access committee?
Data issues will systematically be discussed in the general assembly meetings.
The access to sensitive data will be granted by the management board of the
project.
## Making data interoperable
2.3.1 Are the data produced in the project interoperable, that is allowing
data exchange and re-use between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different origins)?
All the data collected or generated during the project and made public will be
available in machine-readable format, using common or open-licence free
software, and described using appropriate metadata.
2.3.2 What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?
For most of the data collected or generated during the project, the metadata
used will follow the Dublin Core interoperable metadata standard. For the
specific validation data, the metadata used will follow the standards
developed by the European and Mediterranean Plant Protection Organisation
Panel on Diagnostics and Quality Assurance.
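As an illustration of the first point, the minimal sketch below builds a Dublin Core description of a hypothetical VALITEST dataset using only the Python standard library; all element values are placeholders, not actual project metadata.

```python
# Minimal sketch of a Dublin Core record; values are illustrative only.
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

record = ET.Element("record")
for term, value in [
    ("title", "Inter-laboratory validation data for a PCR-based test"),
    ("creator", "VALITEST consortium"),
    ("subject", "plant pest diagnostics"),
    ("date", "2020-01-01"),
    ("type", "Dataset"),
    ("format", "text/csv"),
    ("language", "en"),
    ("rights", "To be defined by the management board"),
]:
    # Dublin Core elements live in the dc: namespace (Clark notation).
    ET.SubElement(record, f"{{{DC}}}{term}").text = value

print(ET.tostring(record, encoding="unicode"))
```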
2.3.3 Will you be using standard vocabularies for all data types present in
your data set, to allow inter-disciplinary interoperability?
To allow inter-disciplinary interoperability, all the documents and data
generated during the project will use the standard vocabulary developed by the
International Plant Protection Convention (IPPC) to provide a harmonised
internationally agreed vocabulary associated with phytosanitary measures 2 .
## Increase data re-use (through clarifying licences)
2.4.1 How will the data be licensed to permit the widest re-use possible?
For public data, the reuse of the data will be possible through the open
repositories where they will be stored.
2.4.2 When will the data be made available for re-use? If an embargo is sought
to give time to publish or seek patents, specify why and how long this will
apply, bearing in mind that research data should be made available as soon as
possible.
The specific decision on an embargo for research data will be taken by the
management board. Scientific research articles should be openly accessible at
the latest upon publication if published in an Open Access journal, or within
6 months of publication otherwise. For research data, open access should by
default be provided when the associated research paper is available in open
access.
2.4.3 Are the data produced and/or used in the project useable by third
parties, in particular after the end of the project? If the re-use of some
data is restricted, explain why.
Most of the data collected or generated during the project will be available
from open repositories, and therefore reusable by third parties, even after
the end of the project. For ethical and legal reasons, personal or sensitive
data will not be made public (deliverable 9.1 - confidential). Data concerning
intellectual property will be discussed between relevant partners, and
decision will be taken according to the European and national rules.
2.4.4 How long is it intended that the data remains re-usable?
Regarding data stored in an OpenAIRE-compliant public repository, all files
stored within the repository shall be retained after the project to meet the
requirements of good scientific practice.
For data stored otherwise, researchers, institutions, journals and data
repositories have a shared responsibility to ensure long-term data
preservation. Partners must commit to preserving their datasets, on their own
institutional servers, for at least five years after publication. If, during
that time, the repository to which the data were originally submitted
disappears or experiences data loss, the partners will be required to upload
the data to another repository and publish a correction or update to the
original persistent identifier, if required.
2.4.5 Are data quality assurance processes described?
For the VALITEST consortium, it is essential to provide good quality data.
This will be ensured through various methods. Firstly, partners have existing
data quality assurance processes, which are described in their quality manual.
Secondly, publications will be disseminated using peer-reviewed journals, and
similarly, research data will be deposited on repositories providing curation
system appropriate to the data.
# Allocation of resources
## What are the costs for making data FAIR in your project?
Costs directly associated to FAIR data management have been included within
the description of the different tasks of the project.
## How will these be covered?
Costs related to open access to research data are eligible as part of the
Horizon 2020 grant (if compliant with the Grant Agreement conditions).
## Who will be responsible for data management in your project?
The management team (ANSES, [email protected]_ ) will ensure best practices
and FAIR principles in the data management of the project. Each partner will
be responsible for managing the data it uses, processes or generates in the
project. In liaison with its data protection officer, each partner has
appointed a data manager who ensures compliance with the DMP principles and
is involved in personal data protection.
## Are the resources for long term preservation discussed (costs and
potential value, who decides and how what data will be kept and for how long)?
As an intergovernmental organisation responsible for cooperation in plant
health within the Euro-Mediterranean region, EPPO will ensure the long term
preservation and availability of the validation data on its own resources.
# Data security
## What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?
Partners will store and process personal data according to their internal
procedures. The security of data will be ensured by means of appropriate
technical and organisational measures.
## Is the data safely stored in certified repositories for long term
preservation and curation?
Validation data will be stored on EPPO servers located in a French datacenter.
The databases are located on servers independent of the web platform. Access
to the servers is only possible from a secure network (dark fiber from EPPO HQ
or VPN). Access to raw data is only possible from the accounts of EPPO IT
Officers, through an authentication mechanism. The servers are monitored and
supervised 24/7 by a service provider. The provider may under no circumstances
access the data without the agreement of EPPO (the servers belong to EPPO).
Other public data will be stored on one single certified repository (Zenodo).
# Ethical aspects
Deliverables D9.1 (personal data management) and D9.2 (ethical standards and
guidelines in non EU countries) specifically deal with ethical aspects of the
project. These documents include the management of personal and sensitive data
and the management of data imported to, or exported from EU. No difference of
data management will be made between data generated or provided by subjects
located in the EU or in third countries.
# Important topics requiring progress and/or update in future versions of the
DMP
Consider appropriate arrangements with the selected repository
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
1372_GAIN_773330.md
|
# Introduction
This document fulfils deliverable 7.3 by providing information on the data
management policy for the GAIN project. This data management plan (DMP) is
required for projects participating in the Horizon 2020 data pilot. The
objective of the DMP is to detail _“what data the project will generate,
whether and how it will be exploited or made accessible for verification and
re-use, and how it will be curated and preserved”_ .
Guidance from the H2020 Programme indicates that a data management plan should
be submitted early in the project and subsequently revised as the project
matures and more information becomes available. Thus, it is not expected that
a first version of this plan will provide complete detail on all aspects of
data management. Accordingly, updates to this document are expected in line
with interim project reviews.
The remainder of the document is structured following the Guidelines on FAIR
Data Management in Horizon 2020 (European Commission, 2016). Section 1
provides a summary of data collected, generated and re-used on the project.
Section 2 discusses making data findable, accessible, interoperable and re-
usable (FAIR). Section 3 describes the allocation of resources for making data
FAIR. Section 4 considers data security. Section 5 notes ethical aspects
related to data management. Section 6 concludes the report.
# Data summary
The GAIN Consortium includes the 20 partners listed in Table 1 and an
International Partner, namely NOAA, the National Ocean and Atmospheric
Administration (US), which cooperates as a Third Party of the Coordinator,
UNIVE. Non-EU partners are marked in bold.
## Table 1. GAIN Consortium
<table>
<tr>
<th>
Participant Nº (leadership role)
</th>
<th>
Participant legal name
</th>
<th>
Country
</th>
<th>
Type
</th> </tr>
<tr>
<td>
1 (Coordinator; WP5; WP7)
</td>
<td>
Universita Ca' Foscari Venezia (UNIVE)
</td>
<td>
Italy
</td>
<td>
RTD
</td> </tr>
<tr>
<td>
2 (WP3)
</td>
<td>
The University of Stirling (UoS)
</td>
<td>
UK
</td>
<td>
RTD
</td> </tr>
<tr>
<td>
3 (WP1)
</td>
<td>
Alfred-Wegener-Institut Helmholtz- Zentrum für Polar- und Meeresforschung
(AWI)
</td>
<td>
Germany
</td>
<td>
RTD
</td> </tr>
<tr>
<td>
4
</td>
<td>
IBM Ireland Limited (IBM)
</td>
<td>
Ireland
</td>
<td>
CORP 1
</td> </tr>
<tr>
<td>
5 (WP2)
</td>
<td>
Agencia Estatal Consejo Superior de Investigaciones Cientificas (CSIC)
</td>
<td>
Spain
</td>
<td>
RTD
</td> </tr>
<tr>
<td>
6 (WP4)
</td>
<td>
Longline Environment Limited (LLE)
</td>
<td>
Ireland
</td>
<td>
SME
</td> </tr>
<tr>
<td>
7 (WP6)
</td>
<td>
Sparos Lda (SPAROS)
</td>
<td>
Portugal
</td>
<td>
SME
</td> </tr>
<tr>
<td>
**8**
</td>
<td>
Salten Havbrukspark (SHP)
</td>
<td>
Norway
</td>
<td>
SME
</td> </tr>
<tr>
<td>
9
</td>
<td>
Wageningen University (WU)
</td>
<td>
Netherlands
</td>
<td>
RTD
</td> </tr>
<tr>
<td>
10
</td>
<td>
Johann Heinrich von Thuenen-Institut, Bundesforschungsinstitut Fuer Laendliche
Raeume, Wald Und Fischerei (TI)
</td>
<td>
Germany
</td>
<td>
RTD
</td> </tr>
<tr>
<td>
11
</td>
<td>
Agrifood and Biosciences Institute (AFBI)
</td>
<td>
UK
</td>
<td>
RTD
</td> </tr>
<tr>
<td>
12
</td>
<td>
Zachodniopomorski Uniwersytet Technologiczny W Szczecinie (ZUT)
</td>
<td>
Poland
</td>
<td>
RTD
</td> </tr>
<tr>
<td>
13
</td>
<td>
Asociacion Nacional de Fabricantes de Conservas de Pescados y Mariscos-
Centro Tecnico Nacional de Conservacion de Productos de la Pesca (ANFACO)
</td>
<td>
Spain
</td>
<td>
NPO 2
</td> </tr>
<tr>
<td>
**14**
</td>
<td>
Multivector AS (MV)
</td>
<td>
Norway
</td>
<td>
SME
</td> </tr>
<tr>
<td>
**15**
</td>
<td>
Gildeskal Forskningsstasjon AS (GIFAS)
</td>
<td>
Norway
</td>
<td>
SME
</td> </tr>
<tr>
<td>
16
</td>
<td>
Lebeche (LEBCH)
</td>
<td>
Spain
</td>
<td>
CORP 1
</td> </tr>
<tr>
<td>
17
</td>
<td>
Sagremarisco-Viveiros de Marisco Lda (SGM)
</td>
<td>
Portugal
</td>
<td>
SME
</td> </tr>
<tr>
<td>
18
</td>
<td>
Fondazione Edmund Mach (FEM)
</td>
<td>
Italy
</td>
<td>
NPO 2
</td> </tr>
<tr>
<td>
**19**
</td>
<td>
Dalhousie University (DAL)
</td>
<td>
Canada
</td>
<td>
RTD
</td> </tr>
<tr>
<td>
**20**
</td>
<td>
South China Sea Fisheries Research Institute (SCSFRI)
</td>
<td>
China
</td>
<td>
RTD
</td> </tr> </table>
GAIN is structured in 7 Work Packages, plus an Ethics Work Package, which was
added by the EC during the negotiation (see Fig. 1). WP leaders are indicated
in Table 1. The main objects of each WP are listed below.
WP1 - Production and Environment: will develop novel sustainable feeds and
tools for enhancing the sustainable management of aquafarms based on Big Data
analytics.
WP2 - Secondary products: will develop new co-products, in order to enhance
circularity, sustainability and profitability of aquaculture supply chains.
WP3 - Policy and markets: will analyse the state-of-the-art of EU and national
legislation with respect to the valorisation and marketing of innovative GAIN
products and co-products and provide suggestions to policy makers.
WP4 - Eco-intensification: will develop new approaches and tools for assessing
the level of eco-intensification of GAIN innovative solutions, in comparison
with standard practices.
WP5 - Professional development: will deliver both on-line and in-presence
courses, in order to facilitate the adoption of GAIN innovative solutions by
aquafarm operators.
WP6 - Dissemination, Exploitation, Communication: will maximize GAIN impact,
by carefully matching communication and dissemination tools to targeted
audiences and developing platforms for exploiting GAIN results beyond its
lifetime.
WP7 - Coordination: will ensure the timely delivery of all GAIN contractual
items.
**Fig. 1. GAIN structure** (work-package diagram: WP1 Production & environment; WP2 Secondary outputs; WP3 Policy & markets; WP4 Eco-intensification; WP5 Professional development; WP6 Dissemination, Exploitation, Communication; WP7 Coordination)
GAIN collects, generates and re-uses data of different types and from
different sources to accomplish the project objectives. In this section we
summarise the purpose of the data collection/generation, the data types and
formats, the origin of the data and the expected volumes.
## Sensor data collected at pilot sites
GAIN will collect large volumes of sensor data at the pilot farm sites. These
sites include:
* a salmon farm at the GIFAS production site of Rossøya, Norway;
* a salmon farm belonging to Cooke Aquaculture, which joined the GAIN Consortium as a committed end-user;
* a salmon farm in Nova Scotia, Canada;
* the LEBCH seabass and seabream farms located near Murcia, Spain;
* a rainbow trout farm, Troticoltura Leonardi, located in Trentino Alto-Adige, Northern Italy, which joined the GAIN Consortium as a committed end-user;
* the SGM mussel farm, located in the Algarve, Portugal;
* a set of mussel farms located in Northern Ireland, UK, which are currently being monitored by AFBI.
Data collected will include:
* time series of environmental variables, e.g. water temperature, dissolved oxygen, chlorophyll a concentration, salinity, current, etc.
* Acoustic monitoring data providing information on biomass, movement patterns, etc.
* Imagery data from in-situ cameras and drones.
The collection of this data is necessary to enable ‘precision aquaculture’
type analytics and better inform on farm condition, environmental status and
time evolution. This data originates at the pilot farm sites participating in
the project and is not a re-use of existing data. The data will be critical to
the Information Management System developed during the project and of
significant scientific value to the academic partners in the project. The
total size of data generated is expected to be in the range of Petabytes and
varies according to the type. For example, the acoustic monitoring system
returns very large datasets generating 2D spatial maps at five second
frequency, while the time series datasets return hourly data at five pilot
sites producing about 43,000 data points per year for each variable collected.
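As a quick sanity check of the figure quoted above, the arithmetic can be written out; the sketch below assumes hourly sampling at the five pilot sites, as stated.

```python
# Back-of-the-envelope check of the time-series volume quoted above,
# assuming hourly sampling at five pilot sites (figures illustrative).
sites = 5
samples_per_year = 24 * 365          # hourly sampling
points_per_variable = sites * samples_per_year
print(points_per_variable)           # 43800, i.e. "about 43,000" per variable
```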
## Operational data collected at pilot sites
We will collect a range of operational datasets at the pilot sites. Data
collected will include:
* Production data concerning GAIN pilot sites. (e.g. total biomass, yield, feed conversion ratio).
* Data concerning husbandry practices, (e.g. feeding time, feed rations, feed compositions, parasite presence).
* Data concerning fish welfare (e.g. behaviour, sea louse counts, mortalities, fish growth and condition, fish health parameters).
The collection of this data is necessary to enable ‘precision aquaculture’
type analytics and understand how external factors influence farm productivity
and effects. This data originates at the pilot farm sites participating in the
project and is not a re-use of existing data. The data will be critical to
developing data-driven modelling related to feature extraction, predictive
analytics and decision support. Further, the data is of significant scientific
value to the academic partners in the project. Data type will encompass both
unstructured data (e.g. reporting) and structured data (e.g. biomass data) and
expected volumes will be in the Gigabytes per farm.
## Data related to feed design
We will collect a range of data to guide feed design, also re-using data
collected at pilot farm sites (described above). Data collected will include:
* Microalgae species and strains, based on literature and specific experiments carried out in GAIN.
* Environmental variables affecting microalgae growth.
* Concentrations of Zn and Selenium in microalgae and culture medium.
* Economic data on microalgae production at pilot scale and industrial scale.
The collection of this data is necessary to explore the viability of using
algae as feed components. Further data will be collected related to the
selected feed formulations including:
* Data concerning feed pellet features e.g. mixing homogeneity, pellet durability index, pellet hardness, physical water stability, sinking velocity, fat absorption/leaking, starch gelatinization and nutrient leaching.
* Data related to feed trials testing the performance and FCR of the selected formulation.
These datasets are critical to developing novel feed formulation in WP1.
Data concerning feed pellets will be determined during specific tests at the
premises of SPAROS, a GAIN partner with strong expertise in feed formulation
and manufacturing. Data related to feed originate from a set of feed trials
on four selected species, namely Atlantic salmon, rainbow trout, seabream, and
turbot as a niche species, in at least 8 feeding trials performed by SPAROS
(seabream), AWI (turbot), FEM (trout) and GIFAS (salmon). The Key Performance
Indicators determined in these trials are summarized in Table 2.
The expected volume of data related to feed design is in the range of 100,000
data points.
**Table 2. Key Performance Indicators determined in GAIN feed trials.**
<table>
<tr>
<th>
_Category Key performance Indicators Tissues/samples_
</th>
<th>
_Analysis_
</th>
<th>
_Species_
</th> </tr>
<tr>
<td>
_**Performance** _
</td>
<td>
_**Feed intake** _
</td>
<td>
_**Whole body** _
</td>
<td>
</td>
<td>
_**All** _
</td> </tr>
<tr>
<td>
_**Weight gain** _
</td>
<td>
_**Whole body** _
</td>
<td>
</td>
<td>
_**All** _
</td> </tr>
<tr>
<td>
_**Condition factor** _
</td>
<td>
_**Whole body** _
</td>
<td>
</td>
<td>
_**All** _
</td> </tr>
<tr>
<td>
_**Biomarkers** _ 3
</td>
<td>
_**Liver** _
</td>
<td>
_**PCR array** _
</td>
<td>
_**Seabream, salmon** _
</td> </tr>
<tr>
<td>
_**Resource efficiency** _
</td>
<td>
_**Feed conversion (FCR)** _
</td>
<td>
_**Whole body** _
</td>
<td>
</td>
<td>
_**All** _
</td> </tr>
<tr>
<td>
_**Digestibility** _
</td>
<td>
_**Feed, Faeces** _
</td>
<td>
_**Macro and micronutrients** _
</td>
<td>
_**All** _
</td> </tr>
<tr>
<td>
_**Retention efficiency** _
</td>
<td>
_**Whole body** _
</td>
<td>
_**Macro and micronutrients** _
</td>
<td>
_**All** _
</td> </tr>
<tr>
<td>
_**Health & welfare ** _
</td>
<td>
_**Mortality** _
</td>
<td>
</td>
<td>
</td>
<td>
_**All** _
</td> </tr>
<tr>
<td>
_**Enteritis** _
</td>
<td>
_**Intestine** _
</td>
<td>
_**Histology** _
</td>
<td>
_**All** _
</td> </tr>
<tr>
<td>
_**Parasitic infestation** _
</td>
<td>
_**Skin** _
_**Intestine** _
</td>
<td>
**Lepeophtheirus salmonis Enteromixum leei**
</td>
<td>
_**Salmon Seabream** _
</td> </tr>
<tr>
<td>
_**Mucosal function** _
</td>
<td>
_**Skin, gills** _
</td>
<td>
_**Histology** _ 4
</td>
<td>
_**All** _
</td> </tr>
<tr>
<td>
_**Plasma lysosyme** _
</td>
<td>
_**Plasma** _
</td>
<td>
_**Enzymatic activity** _
</td>
<td>
_**All** _
</td> </tr>
<tr>
<td>
_**Bactericidal activity** _
</td>
<td>
_**Plasma** _
</td>
<td>
</td>
<td>
_**All** _
</td> </tr>
<tr>
<td>
_**Biomarkers** _ 5
</td>
<td>
_**Head kidney** _
</td>
<td>
_**PCR array** _
</td>
<td>
_**Seabream, salmon** _
</td> </tr>
<tr>
<td>
_**Quality** _
</td>
<td>
_**Dressout loss** _
</td>
<td>
</td>
<td>
</td>
<td>
_**All** _
</td> </tr>
<tr>
<td>
_**Fillet yield** _
</td>
<td>
_**Fillet** _
</td>
<td>
</td>
<td>
_**All** _
</td> </tr>
<tr>
<td>
_**Pigment** _
</td>
<td>
_**Fillet** _
</td>
<td>
_**Visual and/or chemical** _
</td>
<td>
_**Salmon, trout** _
</td> </tr>
<tr>
<td>
_**Texture** _
</td>
<td>
_**Fillet** _
</td>
<td>
_**Shear force** _
</td>
<td>
_**All** _
</td> </tr>
<tr>
<td>
_**Taste** _
</td>
<td>
_**Fillet** _
</td>
<td>
_**Organoleptic (chefs)** _
</td>
<td>
_**All** _
</td> </tr> </table>
## Data related to aquaculture by-products
We will collect a range of data related to aquaculture by-products. Data
collected from finfish products will include:
* Data concerning potential by-products composition, in terms of specific flesh yields, fatty acid and different fractions (e.g. fillet, head, trimming, viscera, skin, etc.),
* Data concerning the yield and composition of marine peptones, protein hydrolysates, oils, minerals, collagen and gelatines obtained from Enzymatic Hydrolysis of byproducts.
Data collected will be critical to allow an analysis of potential
valorisation, and an assessment of increase of the edible proportion and waste
reduction. It will be used to determine environmental and economic benefits of
redirecting by-product fractions to innovative ecointensification. Data will
be collected from laboratory analysis during the project and is not a re-use
of existing data.
Data collected from shellfish by-products will include:
* Data concerning the efficiency of shell-based materials for water purification in RAS, biofilters in Aquaponics and Phosphorus removal from land-based fish farm effluents.
* Data about the efficiency of fillers for the cement industry based on shells.
Data collected will be required to achieve project objectives related to the
valorisation of shellfish by-products. It will be used to assess the viability
of bivalve shells in several applications, namely as biofilters in land-based
aquaculture systems and as a substitute material in the construction industry.
Data will be collected from laboratory and prototype testing during the
project and are not a re-use of existing data. We expect to generate tens of
thousands of data points assessing each valorisation application.
## Production and consumption data
To achieve project objectives related to production and consumption of seafood
and implications for policy, the GAIN project will collect data related to the
amount of seafood consumed for different product/species in different
countries. Data will be collected based on information from large retailers,
interviews with key operators and questionnaires. The data will be collated
from a variety of sources and will hence be partly a re-use of data, but will
generate a much more complete representation of production and consumption.
The data are necessary to inform policy recommendations on the most
comprehensive evidence base. We expect the data volume to be relatively small,
resulting in a few hundred data points per country.
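As an illustration of the anonymisation and aggregation foreseen for such data (see Table 3), the sketch below drops the identifying retailer column and aggregates to the country/species level; the column names, example values and the suppression threshold K are assumptions, not project decisions.

```python
# Minimal sketch of anonymisation-and-aggregation with pandas;
# all column names, values and the K threshold are illustrative.
import pandas as pd

raw = pd.DataFrame({
    "retailer": ["A", "A", "B", "C", "C", "C"],   # identifying field
    "country":  ["IT", "IT", "ES", "IT", "ES", "ES"],
    "species":  ["salmon"] * 6,
    "tonnes":   [10.0, 12.5, 8.0, 4.0, 6.0, 5.5],
})

# Drop the identifying column and aggregate to country/species level.
agg = (raw.drop(columns="retailer")
          .groupby(["country", "species"], as_index=False)
          .agg(tonnes=("tonnes", "sum"), n_sources=("tonnes", "size")))

# Suppress cells built from too few sources (hypothetical threshold).
K = 2
agg.loc[agg["n_sources"] < K, "tonnes"] = None
print(agg)
```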
## GAIN model data
A range of model data will be generated during the GAIN project. These include
mechanistic models related to fish growth and data-driven models for 1)
prediction, 2) feature extraction and 3) decision support platform. These data
will be generated during the project and will not be a re-use of existing
data. The data will be critical to the precision aquaculture ambition of the
project that aims to leverage analytics to better inform farm operations. We
expect the data volumes to be in the range of 100s of Gigabytes for individual
farms.
## Existing Geospatial data
GAIN intends to make use of several geospatial datasets during the project.
Specifically, we will use the following:
* Weather data (e.g. wind speeds, air temperature)
* Ocean model data (e.g. ocean currents, wave heights)
* Satellite data (e.g. sea-surface temperature, chlorophyll-a)
GAIN will make use of these datasets to inform on environmental conditions at
pilot farm sites. The project intends to re-use these datasets rather than
generate them, and a number of potential sources have been identified (e.g.
MODIS satellite data, ECMWF ocean model data). The data will be used as part
of the precision aquaculture component in WP1.
# FAIR data
This section details how GAIN will make data _**Findable, Accessible,
Interoperable, and Reusable** _ (FAIR). Each topic is addressed in turn.
## Making data findable, including provision for metadata
Following H2020 guidelines, two types of datasets are considered here:
1. **The ‘underlying data’ –** data and metadata related to scientific publication generated during the project
2. **Any other data –** for instance curated data not directly attributable to a publication, or raw data
GAIN plans to make project data findable by associating open data sets with
scientific publications. Publications may include traditional research papers
as well as “data only” articles which present and summarize data, but do not
offer in-depth analysis or draw conclusions. A scientific paper about a
dataset is an ideal forum to provide metadata, discuss naming conventions,
point to standards, and list relevant keywords. Publication venues which
utilize Digital Object Identifiers will be favoured for GAIN, and all attempts
will be made to associate a unique DOI with the research data as well.
## Making data openly accessible
Data will be made openly available by uploading to an appropriate repository.
Data which is uploaded to a repository will use an interoperable format with
the exact format dependent on the type and volume of data together with domain
conventions and standards. Example data formats will be text files of comma
separated values and CF convention NetCDF files. We will use the _Zenodo_
repository (zenodo.org) provided by CERN for open datasets generated during
the project. Datasets made publicly available will not require restricted
access.
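As an illustration of the second of these formats, the sketch below writes an hourly temperature series to a CF-convention NetCDF file with xarray; the variable name, attribute values and data are illustrative only, not a prescribed GAIN schema.

```python
# Minimal sketch of a CF-convention NetCDF file; values are illustrative.
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range("2019-01-01", periods=24, freq="H")
temp = 8.0 + np.random.randn(24) * 0.2   # hourly water temperature, degC

ds = xr.Dataset(
    {"water_temperature": ("time", temp)},
    coords={"time": times},
)
ds["water_temperature"].attrs = {
    "standard_name": "sea_water_temperature",   # CF standard name
    "units": "degree_Celsius",
}
ds.attrs["Conventions"] = "CF-1.8"
ds.to_netcdf("pilot_site_temperature.nc")
```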
The close connection between _Zenodo_ and Horizon 2020 projects made it a
natural and easy fit. We created a _Zenodo_ community for the GAIN project
curated by IBM. This links directly to the _OpenAire_ page for the GAIN
project, provides a concise summary of publications emanating from the
project, and all GAIN datasets will be readily accessible by searching for the
‘GAIN’ community in _Zenodo_ . Further, any dataset uploaded to _Zenodo_ is assigned
a unique DOI, which makes referencing (e.g. in publications) very convenient.
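For completeness, records in the GAIN community can also be discovered programmatically through Zenodo's public search API. The sketch below assumes the community identifier `gain_h2020` (the identifier reported elsewhere in the project documentation); no authentication is needed for open records.

```python
# Minimal sketch of discovering GAIN records via Zenodo's public search API.
import requests

r = requests.get("https://zenodo.org/api/records",
                 params={"communities": "gain_h2020", "size": 10})
r.raise_for_status()
for hit in r.json()["hits"]["hits"]:
    print(hit["doi"], "-", hit["metadata"]["title"])
```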
Table 3 summarises the datasets being generated in GAIN and their open-access status.
### Table 3: Perspectives on open-access for GAIN datasets
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Perspective on open-access**
</th>
<th>
**Plan to open**
</th> </tr>
<tr>
<td>
Sensor data collected at
pilot sites
</td>
<td>
Data related to environmental data collected at farm sites is scientifically
useful but often commercially sensitive
</td>
<td>
Ongoing communication with farm owners to separate commercially sensitive data
from data that can be
released
</td> </tr>
<tr>
<td>
Data on farm operations
</td>
<td>
Data related to farm operations are often commercially sensitive
</td>
<td>
NO
</td> </tr>
<tr>
<td>
Data related to feed design
</td>
<td>
Data related to feed design may be commercially exploitable; however, we will explore making open partial sets of the data
</td>
<td>
?
</td> </tr>
<tr>
<td>
Data related to
aquaculture by-products
</td>
<td>
Data may be commercially exploitable but we will explore making open partial
sets of the data
</td>
<td>
?
</td> </tr>
<tr>
<td>
Production and
consumption data
</td>
<td>
Some production and consumption data may be commercially sensitive (retailer
data), but we will explore making open, possibly with suitable anonymisation
and
aggregation
</td>
<td>
As stated in GAIN Deliverable
8.1, confidential information on production and consumption will be collected
only upon informed consent of the data owners. Informed consent forms will
clearly state that the data will be used only after anonymisation and
aggregation
</td> </tr>
<tr>
<td>
GAIN model data
</td>
<td>
Data related to environmental and operational insights on farm production are
often
commercially sensitive
</td>
<td>
Ongoing communication with farm owners to separate commercially sensitive data
from data that can be released.
</td> </tr>
<tr>
<td>
Existing geospatial data
</td>
<td>
These are re-used data which are either already openly available or which the
project does not have the right to publish.
</td>
<td>
NO
</td> </tr> </table>
## Interoperability of data
Interoperability of GAIN data will be accomplished through standardised data
formats (and we aim to promote standardisation of data formats through GAIN)
and appropriate meta-description and documentation. We plan to accompany open
data with a scientific article or technical white paper to promote reuse. GAIN
aims to promote semantic mapping and relevant ontologies for aquaculture farms
to promote interoperability and increase access to individuals from outside
domains (particularly from data science fields where the focus is much more on
the data itself rather than domain characteristics).
## Data re-use
Re-use of data will be encouraged through clear licensing, prompt
dissemination, archiving, and quality assurance. At this writing, a license for
GAIN data has not yet been selected. We note the preference expressed in the
OpenAIRE Licensing Study (Dietrich et. al. 2014) for version 4.0 of the
Creative Commons Licenses. Dissemination of GAIN data will promptly follow
publication of related scientific papers. We plan to disseminate data once
papers have been accepted for publication. Data published from GAIN will be
disseminated through Zenodo facilitating re-use of the data by third parties
for an indefinite period after the project is over. Data published from the
project will undergo a quality assurance process managed by the individuals
responsible for the scientific publication.
# Allocation of resources
The project budget allocates resources to make GAIN data FAIR. Tasks in the
project which are related to open data include:
* Task 1.2 related to developing novel feed components
* Task 1.3 Instrumentation of commercial aquaculture facilities
* Task 1.4 Development of a real-time Information Management System
* Task 3.2 Assessment of production and consumption data and implications for policy
Once data has been made FAIR during the course of the project and placed in a repository, further costs are not anticipated for the project team. Benefits of long-term preservation include historical benchmarking and comparison activities.
# Data Security
The security of data collected and generated during GAIN is important to the
success of the project and essential to providing subsequent open access to
selected project data. In this regard, care will be taken to facilitate data
recovery, provide adequate data storage and enable transfer of sensitive data
as required by the project.
Responsibility for data recovery falls to individual software and hardware
components of the GAIN system. Much of the data considered for open access
will move through the cloud service platform for real-time management of
aquaculture data. The data recovery strategy for cloud service platform will
rely on the recovery infrastructure of the cloud environment where it is
hosted. Box provides a secure storage environment for data in the cloud. Where
it is necessary to transfer sensitive data, care will be taken to use
encrypted connections. Taking these measures in relevant activities
contributes to data security on the GAIN project.
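One possible realisation of such an encrypted transfer is an SFTP session, sketched below with paramiko; the host name, user and paths are placeholders for illustration, not actual project infrastructure.

```python
# Minimal sketch of moving a sensitive file over an encrypted channel (SFTP);
# host, user and paths are placeholders, not GAIN infrastructure.
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect("example-gain-host.eu", username="gain_user")  # hypothetical
sftp = client.open_sftp()
sftp.put("sensitive_operational_data.csv", "/incoming/data.csv")
sftp.close()
client.close()
```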
# Ethical aspects
The ethics review for the GAIN project raised the issues of:
* informed consent for the collection, storage, and protection of personal data.
* In case personal data being transferred from/to a non-EU country or international organisation, confirmation that this complies with national and EU legislation, together with the necessary authorisations.
These issues are treated in project Deliverables 8.1 and 8.2.
# Conclusions
This deliverable describes the data management plan for the GAIN project and
our participation in the Open Data pilot. The range of data being collected,
generated and used as part of the project are described; data typology and
volume are presented together with the utility both towards the GAIN project
and for the wider scientific community. A detailed description of open-access
procedures is provided together with an assessment of the datasets that can
be made open either partially or in totality. A final decision on providing
any particular dataset as open data will be documented in a future revision of
this report.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
1373_GAIN_773330.md
|
# Executive Summary
A data management plan (DMP) is required for projects participating in the
Horizon 2020 data pilot. The objective of the DMP is to detail _“what data
the project will generate, whether and how it will be exploited or made
accessible for verification and re‐use, and how it will be curated and
preserved”_ .
The GAIN DMP was structured in accordance with the Guidelines on FAIR Data
Management in Horizon 2020 (European Commission, 2016): the plan is presented
in (O’Donncha F. & Pastres R., 2018), which was submitted at an early stage of
the project, at Month 6, as required by the GAIN Description of Action.
At the end of the first reporting period, the present document reviews the
GAIN DMP, in order to identify gaps in its early version and amend the plan,
if required. The document includes:
1. a summary of the data that have been collected, generated and re‐used during the first reporting period;
2. an assessment of the strategy implemented for making data findable, accessible, interoperable and re‐usable (FAIR);
3. an assessment of the data security measures adopted.
The main conclusions are:
1. the data collected during the first reporting period fell into the data typologies identified in D7.3 and the strategy and protocols for archiving them proved effective: therefore, there is no need to change them;
2. data processing and dissemination of data/results will take place mainly during Month 18‐Month 42: based on the limited experience pertaining to the first reporting period, the strategy for making data findable, accessible, interoperable and re‐usable (FAIR) seems adequate;
3. thus far, no leakage of personal or sensitive data has been detected, confirming that the security and data protection measures adopted in GAIN have been effective.
# Summary of data collected during the first reporting period
As foreseen in (O’Donncha F. & Pastres R., 2018), during the first reporting
period GAIN collected, generated and re‐used data of different types and from
different sources. In this section we summarise the purpose of the data
collection/generation and provide an overview of the data collected thus far,
in order to identify deviations from the data types and formats and archiving
protocols planned at an early stage of the project and summarized in
(O’Donncha F. & Pastres R., 2018).
## Sensor data collected at pilot sites
GAIN collected large volumes of sensor data at the pilot farm sites. The data
collection is ongoing and will end at M36 (April 2021). The 10 GAIN Pilot
sites are described in detail in (Service, M. et al., 2019): their main
features are summarized in Table 1.
Table 1 ‐ Summary of Pilot Sites
<table>
<tr>
<th>
**Pilot Site**
</th>
<th>
**Country**
</th>
<th>
**Species**
</th>
<th>
**Type of Aquaculture**
</th>
<th>
**Partner**
</th> </tr>
<tr>
<td>
1) Dundrum Bay
</td>
<td>
Northern
Ireland ‐ UK
</td>
<td>
Oysters ( _Magallana_
_gigas_ )
</td>
<td>
Shellfish Aquaculture
</td>
<td>
AFBI
</td> </tr>
<tr>
<td>
2) Belfast Lough
</td>
<td>
Northern
Ireland ‐ UK
</td>
<td>
Mussels ( _Mytilus_
_edulis_ )
</td>
<td>
Shellfish Aquaculture
</td>
<td>
AFBI
</td> </tr>
<tr>
<td>
3) Sagres
</td>
<td>
Portugal
</td>
<td>
Mussels ( _Mytilus_
_galloprovincialis_ )
</td>
<td>
Shellfish Aquaculture
</td>
<td>
SGM
</td> </tr>
<tr>
<td>
4) Rossøya Nord
</td>
<td>
Norway
</td>
<td>
Salmon ( _Salmo salar_ )
</td>
<td>
Sea cages
</td>
<td>
GIFAS
</td> </tr>
<tr>
<td>
5) Carness Bay
</td>
<td>
Scotland
</td>
<td>
Salmon ( _Salmo salar_ )
</td>
<td>
Sea cages
</td>
<td>
UoS
</td> </tr>
<tr>
<td>
6) McNutt’s Island,
Shelburne
</td>
<td>
Canada
</td>
<td>
Salmon ( _Salmo salar_ )
</td>
<td>
Sea cages
</td>
<td>
DAL
</td> </tr>
<tr>
<td>
7) El Gorguel, Cartagena
</td>
<td>
Spain
</td>
<td>
Seabass ( _Dicentrarchus_
_labrax_ )
</td>
<td>
Sea cages
</td>
<td>
LEBCH
</td> </tr>
<tr>
<td>
8) Preore, Troticoltura
Leonardi
</td>
<td>
Italy
</td>
<td>
Rainbow trout
( _Oncorhynchus mykiss_ )
</td>
<td>
Land‐based raceways
</td>
<td>
UNIVE
</td> </tr>
<tr>
<td>
9) FES NOWE Czarnowo
</td>
<td>
Poland
</td>
<td>
Carp ( _Cyprinus carpio_ )
</td>
<td>
Land‐based RAS/pond
</td>
<td>
ZUT
</td> </tr>
<tr>
<td>
10) Fenzhou Village
</td>
<td>
China
</td>
<td>
Shrimp ( _Litopenaeus_
_vannamei_ , _Macrobrachium_
_rosenbergii_ )
</td>
<td>
Land‐based pond
</td>
<td>
SCSFRI
</td> </tr> </table>
The variables collected at each site are summarized in Tables 2 and 3; the
data collection frequencies, f_d, are coded as follows (a small sketch of this
coding as a function is given after the list):
* H: f_d ≥ 1/hour
* D: 1/day ≤ f_d < 1/hour
* W: 1/week ≤ f_d < 1/day
* F: 1/2 weeks ≤ f_d < 1/week
* M: 1/month ≤ f_d < 1/2 weeks
* S: f_d < 1/month and/or as a response to observed anomalies.
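A minimal sketch of this coding expressed as a function, with f_d given in samples per day; the thresholds simply restate the bounds listed above.

```python
# Minimal sketch of the frequency coding; f_d is in samples per day.
def frequency_code(f_d: float) -> str:
    if f_d >= 24:        # at least hourly
        return "H"
    if f_d >= 1:         # at least daily
        return "D"
    if f_d >= 1 / 7:     # at least weekly
        return "W"
    if f_d >= 1 / 14:    # at least fortnightly
        return "F"
    if f_d >= 1 / 30:    # at least monthly
        return "M"
    return "S"           # sporadic / anomaly-driven

print(frequency_code(24), frequency_code(1 / 10))  # -> H F
```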
In accordance with the theoretical framework of Precision Fish Farming, these
variables are classified as environmental and animal variables.
Table 2. Summary of environmental variables monitored on GAIN pilot sites
<table>
<tr>
<th>
</th>
<th>
**Site 1**
</th>
<th>
**Site 2**
</th>
<th>
**Site 3**
</th>
<th>
**Site 4**
</th>
<th>
**Site 5**
</th>
<th>
**Site 6**
</th>
<th>
**Site 7**
</th>
<th>
**Site 8**
</th>
<th>
**Site 9**
</th>
<th>
**Site 10**
</th> </tr>
<tr>
<td>
Water
Temperature
</td>
<td>
H
</td>
<td>
H
</td>
<td>
H
</td>
<td>
H
</td>
<td>
H
</td>
<td>
H
</td>
<td>
H
</td>
<td>
H
</td>
<td>
H/D
</td>
<td>
D
</td> </tr>
<tr>
<td>
Salinity
</td>
<td>
H
</td>
<td>
H
</td>
<td>
H
</td>
<td>
H
</td>
<td>
D
</td>
<td>
</td>
<td>
</td>
<td>
H
</td>
<td>
</td>
<td>
D
</td> </tr>
<tr>
<td>
Dissolved Oxygen
</td>
<td>
H
</td>
<td>
H
</td>
<td>
H
</td>
<td>
H
</td>
<td>
H
</td>
<td>
H
</td>
<td>
H
</td>
<td>
H
</td>
<td>
H/D
</td>
<td>
D
</td> </tr>
<tr>
<td>
pH
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
D
</td>
<td>
H/D
</td>
<td>
D
</td> </tr>
<tr>
<td>
ORP
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
D
</td>
<td>
</td>
<td>
W
</td> </tr>
<tr>
<td>
Turbidity
</td>
<td>
H
</td>
<td>
H
</td>
<td>
</td>
<td>
</td>
<td>
D
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Chlorophyll a
</td>
<td>
H
</td>
<td>
H
</td>
<td>
H
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
H
</td>
<td>
</td>
<td>
</td>
<td>
W
</td> </tr>
<tr>
<td>
Tryptophan
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
N‐NH4
</td>
<td>
</td>
<td>
F
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
D
</td>
<td>
W
</td>
<td>
D
</td> </tr>
<tr>
<td>
N‐NO3
</td>
<td>
</td>
<td>
F
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
H
</td>
<td>
</td>
<td>
D
</td> </tr>
<tr>
<td>
P‐SRP
</td>
<td>
</td>
<td>
F
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Si‐SiO2
</td>
<td>
</td>
<td>
F
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
POM
</td>
<td>
</td>
<td>
F
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
W
</td> </tr>
<tr>
<td>
POC
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
W
</td> </tr>
<tr>
<td>
TSS
</td>
<td>
</td>
<td>
F
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
W
</td> </tr> </table>
Table 3. Summary of animal variables monitored on GAIN pilot sites
<table>
<tr>
<th>
</th>
<th>
**Site 1**
</th>
<th>
**Site 2**
</th>
<th>
**Site 3**
</th>
<th>
**Site 4**
</th>
<th>
**Site 5**
</th>
<th>
**Site 6**
</th>
<th>
**Site 7**
</th>
<th>
**Site 8**
</th>
<th>
**Site 9**
</th>
<th>
**Site 10**
</th> </tr>
<tr>
<td>
**Size**
**(weight/length) distribution**
</td>
<td>
F
</td>
<td>
F
</td>
<td>
M
</td>
<td>
H
</td>
<td>
S
</td>
<td>
S
</td>
<td>
S
</td>
<td>
H
</td>
<td>
W/M
</td>
<td>
W
</td> </tr>
<tr>
<td>
**Biomass**
</td>
<td>
S
</td>
<td>
S
</td>
<td>
S
</td>
<td>
H
</td>
<td>
S
</td>
<td>
S
</td>
<td>
S
</td>
<td>
W
</td>
<td>
W/M
</td>
<td>
S
</td> </tr>
<tr>
<td>
**Relative biomass distribution in cages**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
H
</td>
<td>
H
</td>
<td>
H
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Feeding activity**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
H
</td>
<td>
H
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Fish speed and location**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
H
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Parasites**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
M
</td>
<td>
S
</td>
<td>
S
</td>
<td>
S
</td>
<td>
S
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Welfare indicators**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
M
</td>
<td>
M
</td>
<td>
M
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Mortality**
</td>
<td>
S
</td>
<td>
S
</td>
<td>
S
</td>
<td>
D
</td>
<td>
W
</td>
<td>
W
</td>
<td>
D
</td>
<td>
D
</td>
<td>
D
</td>
<td>
S
</td> </tr> </table>
The data are fed to the Information Management System, which is being
developed in Task 1.4 “Development of a real‐time Information Management
System”. As described in detail in (O’Donncha F., et al., 2019), a protocol
for standardizing the data flow from pilot sites to the platform was set up
and implemented, in order to ensure proper data archiving and interoperability
(a sketch in this spirit is given after the list below). The total size of the
data generated is in accordance with the estimate provided in (O’Donncha F. &
Pastres R., 2018). Animal variables generate much smaller data volumes, but in
a more unstructured format. Examples of these types of measurements coming
from Rossøya Nord are:
* 'Welfare‐Score_Condition',
* 'Welfare‐Score_Deformity', 'Welfare‐Score_EyeHealth',
* 'Welfare‐Score_FinDamage',
* 'Lice‐Average_chalimus',
* 'Lice‐Average_matureFemale',
* 'Lice‐Average_pre‐adult',
* 'Mortality'.
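The sketch below illustrates the spirit of such a standardised flow: a single observation serialised as JSON and pushed to the platform over HTTPS. The endpoint, token and field names are assumptions for illustration, not the actual protocol documented in (O’Donncha F., et al., 2019).

```python
# Minimal sketch of pushing one standardised observation to the IMS;
# endpoint, token and field names are hypothetical.
import requests

observation = {
    "site": "Rossøya Nord",
    "timestamp": "2019-06-01T10:00:00Z",
    "variable": "Welfare-Score_FinDamage",   # example variable from the list
    "value": 0.4,
    "units": "score",
}

r = requests.post(
    "https://example-gain-ims.eu/observations",      # hypothetical endpoint
    headers={"Authorization": "Bearer YOUR-TOKEN"},  # placeholder credential
    json=observation,
    timeout=30,
)
r.raise_for_status()
```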
## Data related to feed design
During the first reporting period, the collection of data concerning feed
design, manufacturing with innovative ingredients and testing was planned in
detail.
Task 1.1 concerns the identification of microalgae strains which could
bioaccumulate Zinc (Zn) and Selenium (Se). These trace elements are very
important for fish growth and welfare: small amounts of Zn and Se enriched
microalgae in feed could therefore enhance feed performances. Therefore, in
Task 1.1 the following data are being collected.
* Variables affecting microalgae growth, e.g. light intensity, water temperature, concentration of macronutrients in the culture medium.
* Concentrations of Zinc and Selenium in microalgae and culture medium.
* Economic data on microalgae production at pilot scale and industrial scale.
The first feed trials are being carried out and preliminary results on the
feed Key Performance Indicators (see Table 4) are available, as described in
GAIN Deliverable D1.4. Data concerning feed pellet features are also being
collected, e.g. mixing homogeneity, pellet durability index, pellet hardness,
physical water stability, sinking velocity, fat absorption/leaking, starch
gelatinization and nutrient leaching.
No marked deviation from the GAIN DoA and D7.3 can be expected at this stage.
Table 4. Key Performance Indicators for assessing novel feeds.
<table>
<tr>
<th>
**Category**
</th>
<th>
**Key performance**
**Indicators**
</th>
<th>
**Tissues/samples**
</th>
<th>
**Analysis**
</th>
<th>
**Species**
</th> </tr>
<tr>
<td>
Performance
</td>
<td>
Feed intake
</td>
<td>
Whole body
</td>
<td>
</td>
<td>
All
</td> </tr>
<tr>
<td>
Weight gain
</td>
<td>
Whole body
</td>
<td>
</td>
<td>
All
</td> </tr>
<tr>
<td>
Condition factor
</td>
<td>
Whole body
</td>
<td>
</td>
<td>
All
</td> </tr>
<tr>
<td>
Biomarkers 1
</td>
<td>
Liver
</td>
<td>
PCR array
</td>
<td>
Seabream, salmon
</td> </tr>
<tr>
<td>
Resource efficiency
</td>
<td>
Feed conversion (FCR)
</td>
<td>
Whole body
</td>
<td>
</td>
<td>
All
</td> </tr>
<tr>
<td>
Digestibility
</td>
<td>
Feed, Faeces
</td>
<td>
Macro and micronutrients
</td>
<td>
All
</td> </tr>
<tr>
<td>
Retention efficiency
</td>
<td>
Whole body
</td>
<td>
Macro and micronutrients
</td>
<td>
All
</td> </tr>
<tr>
<td>
Health & welfare
</td>
<td>
Mortality
</td>
<td>
</td>
<td>
</td>
<td>
All
</td> </tr>
<tr>
<td>
Enteritis
</td>
<td>
Intestine
</td>
<td>
Histology
</td>
<td>
All
</td> </tr>
<tr>
<td>
Parasitic infestation
</td>
<td>
Skin
Intestine
</td>
<td>
_Lepeophtheirus salmonis_ _Enteromixum leei_
</td>
<td>
Salmon Seabream
</td> </tr> </table>
<table>
<tr>
<th>
**Category**
</th>
<th>
**Key performance**
**Indicators**
</th>
<th>
**Tissues/samples**
</th>
<th>
**Analysis**
</th>
<th>
**Species**
</th> </tr>
<tr>
<td>
</td>
<td>
Mucosal function
</td>
<td>
Skin, gills
</td>
<td>
Histology 2
</td>
<td>
All
</td> </tr>
<tr>
<td>
Plasma lysosyme
</td>
<td>
Plasma
</td>
<td>
Enzymatic activity
</td>
<td>
All
</td> </tr>
<tr>
<td>
Bactericidal activity
</td>
<td>
Plasma
</td>
<td>
</td>
<td>
All
</td> </tr>
<tr>
<td>
Biomarkers 3
</td>
<td>
Head kidney
</td>
<td>
PCR array
</td>
<td>
Seabream, salmon
</td> </tr>
<tr>
<td>
Quality
</td>
<td>
Dressout loss
</td>
<td>
</td>
<td>
</td>
<td>
All
</td> </tr>
<tr>
<td>
Fillet yield
</td>
<td>
Fillet
</td>
<td>
</td>
<td>
All
</td> </tr>
<tr>
<td>
Pigment
</td>
<td>
Fillet
</td>
<td>
Visual and/or chemical
</td>
<td>
Salmon, trout
</td> </tr>
<tr>
<td>
Texture
</td>
<td>
Fillet
</td>
<td>
Shear force
</td>
<td>
All
</td> </tr>
<tr>
<td>
Taste
</td>
<td>
Fillet
</td>
<td>
Organoleptic (chefs)
</td>
<td>
All
</td> </tr> </table>
## Data related to aquaculture by‐products
The enhancement of circular economy in the aquaculture sector is one of the
pillars of the GAIN approach to ecological intensification. Data collected from
finfish products included:
* Data concerning potential by‐products composition, in terms of specific flesh yields, fatty acid and different fractions (e.g. fillet, head, trimming, viscera, skin, etc.), as detailed in (Malcorps W, et al., 2019).
* Data concerning the yield and composition of marine peptones, protein hydrolysates, oils, minerals, collagen and gelatines obtained from Enzymatic Hydrolysis of byproducts, as detailed in (Vazquez J.A., et al., 2019).
* Data concerning yield and safety of by‐products from the innovative processes for mortality disposal and RAS wastewater treatment.
These data are being collected from laboratory analysis.
Data collected from shellfish by‐products included:
* Data concerning the efficiency of shell‐based biofilters for water purification in RAS and Aquaponics and for Phosphorus removal from land‐based fish farm effluents.
Data will be used to assess the viability of bivalve shell in several
applications, in land‐based aquaculture systems and as substitute in
construction industry. We expect to generate tens of thousands of data points
assessing each valorisation application.
## Production and consumption data
To achieve project objectives related to production and consumption of seafood
and implications for policy, the GAIN project will collect data related to the
amount of seafood consumed for different product/species in different
countries. Data will be collected based on information from large retailers,
interviews with key operators, questionnaires, etc. The data will be collated
from a variety of sources and will hence be partly a re‐use of data but will
generate a much more complete representation of production and consumption
data. The data is necessary to inform policy recommendations ensuring the most
comprehensive data. We expect the data volume to be relatively small resulting
in a few hundred data points per Country.
## GAIN model data
A range of model data has already been generated during the first reporting
period. These include mechanistic models related to fish growth and
data‐driven models for 1) prediction, 2) feature extraction and 3) decision
support platform. These data will be generated during the project and will not
be a re‐use of existing data. The data will be critical to the precision
aquaculture ambition of the project that aims to leverage analytics to better
inform farm operations. We expect the data volumes to be in the range of 100s
of Gigabytes for individual farms.
## Existing Geospatial data
GAIN is using several geospatial datasets during the project, such as:
* Meteorological data (e.g. wind speeds, air temperature)
* Ocean model data (e.g. ocean currents, wave heights)
* Satellite data (e.g. sea‐surface temperature, chlorophyll-a)
GAIN is using these datasets to inform on environmental conditions at pilot
farm sites. The project intends to re‐use these datasets rather than generate
them, and a number of potential sources have been identified (e.g. MODIS
satellite data, ECMWF ocean model data). The data will be used as part of the
precision aquaculture component in WP1 ‐ Production and Environment.
# FAIR data
The steps for making data _**Findable, Accessible, Interoperable, and
Reusable** _ (FAIR) are described in detail in (O’Donncha F. & Pastres R.,
2018). In summary:
**Findable**
GAIN plans to make project data findable by associating open data sets with
scientific publications. Publication venues which utilize Digital Object
Identifiers will be favoured for GAIN and all attempts will be made to
associate unique DOI with research data (the facility of being able to
associate DOI to data is one of the reasons we favoured Zenodo). We have
posted preprints of scientific publications in our public repository to ensure
availability to the scientific community.
**Accessible**
Collected data which are non‐personal and non‐confidential are, at present,
made available to project partners through the project hub. Data uploaded to
this repository are available in an interoperable format,
depending on the type and volume of data as well as on domain conventions and
standards. Example data formats are: text files of comma separated values and
CF convention NetCDF files. We will use the Zenodo repository (zenodo.org)
provided by CERN for open datasets generated during the project. Datasets made
publicly available will not require restricted access.
We created a Zenodo community for the GAIN project curated by IBM
(https://zenodo.org/communities/gain_h2020/). To protect the integrity of the
repository, we enforced a number of conditions of use, namely:
Only GAIN partners are allowed to upload new data to the community.
Institutions uploading data for this community are responsible for ensuring that:
1. they do not upload any sensitive personal data
2. they do not upload regulated data, i.e. medical data, defence/judicial data, export regulated data or any export sensitive data
The Zenodo repository links directly to the OpenAire page for the GAIN
project, provides a concise summary of publications emanating from the
project, and all GAIN datasets will be readily accessible by searching for the
‘GAIN’ community in Zenodo. Further, any dataset uploaded to Zenodo is
assigned a unique DOI, which makes referencing (e.g. in publications) very
convenient.
Table 5 summarises the datasets being generated in GAIN and their open‐access
status.
Table 5: Perspectives on open‐access for GAIN datasets
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Perspective on open‐access**
</th>
<th>
**Plan to open**
</th> </tr>
<tr>
<td>
Sensor data collected at pilot sites
</td>
<td>
Data related to environmental data collected at farm sites is scientifically
useful but often commercially sensitive
</td>
<td>
Ongoing communication with farm owners to separate commercially sensitive data
from data that can be
released
</td> </tr>
<tr>
<td>
Data on farm operations
</td>
<td>
Data related to farm operations are often commercially sensitive
</td>
<td>
NO
</td> </tr>
<tr>
<td>
Data related to feed design
</td>
<td>
Data related to feed design may be commercially exploitable; however, we will
explore making open partial sets of the data
</td>
<td>
Results of trials will be published and data concerning KPI listed in Table
4 will be made available.
</td> </tr>
<tr>
<td>
Data related to
aquaculture by‐products
</td>
<td>
Data may be commercially exploitable, but we will explore making open partial
sets of the data
</td>
<td>
Results concerning optimal enzymatic hydrolysis conditions, and main
parameters leading to cost effective drying of RAS wastewater and mortalities
will be published in peer reviewed papers.
</td> </tr>
<tr>
<td>
Production and consumption data
</td>
<td>
Some production and consumption data may be commercially sensitive (retailer
data), but we will explore making open, possibly with suitable anonymisation
and
aggregation
</td>
<td>
As stated in GAIN Deliverable
8.1, confidential information on production and consumption will be collected
only upon informed consent of the data owners. Informed consent forms will
clearly state that the data will be used only after anonymisation and
aggregation.
</td> </tr>
<tr>
<td>
GAIN model data
</td>
<td>
Data related to environmental and operational insights on farm production are
often
commercially sensitive
</td>
<td>
Ongoing communication with farm owners to separate commercially sensitive data
from data that can be
released.
</td> </tr>
<tr>
<td>
Existing geospatial data
</td>
<td>
These are re‐used data which are either already openly available or which the
project does not have the right to publish.
</td>
<td>
NO
</td> </tr> </table>
## Interoperability of data
As foreseen in (O’Donncha F. & Pastres R., 2018), interoperability of GAIN
data has been accomplished through standardised data and appropriate
meta‐description and documentation. We plan to accompany open data with a
scientific article or technical white paper to promote reuse. GAIN aims to
promote semantic mapping and relevant ontologies for aquaculture farms to
promote interoperability and increase access to individuals from outside
domains (particularly from data science fields where the focus is much more on
the data itself rather than domain characteristics). We have made public the
code to upload and download data to our cloud service to ensure that the
approaches we use are transparent and visible to others (note that for
security reasons, user credentials are required to actually upload or download
data).
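The published code itself is not reproduced here; the sketch below only illustrates the general shape of credentialed upload/download helpers of this kind. The base URL, endpoints and parameters are placeholders, not the actual GAIN cloud-service API.

```python
# Minimal sketch of credentialed upload/download helpers; the base URL and
# endpoint paths are hypothetical, not the actual GAIN cloud-service API.
import requests

BASE = "https://example-gain-hub.eu/api"   # placeholder service URL

def upload(path: str, dataset: str, token: str) -> None:
    """Upload a local file to a named dataset, authenticated by token."""
    with open(path, "rb") as fh:
        r = requests.post(f"{BASE}/datasets/{dataset}/files",
                          headers={"Authorization": f"Bearer {token}"},
                          files={"file": fh}, timeout=60)
    r.raise_for_status()

def download(dataset: str, name: str, token: str) -> bytes:
    """Fetch one file from a named dataset; returns the raw bytes."""
    r = requests.get(f"{BASE}/datasets/{dataset}/files/{name}",
                     headers={"Authorization": f"Bearer {token}"}, timeout=60)
    r.raise_for_status()
    return r.content
```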
## Data re‐use
Re‐use of data has been encouraged through clear licensing, prompt
dissemination, archiving, and quality assurance. At this writing, a license for
GAIN data has not yet been selected. We note the preference expressed in the
OpenAIRE Licensing Study (Dietrich et. al. 2014) for version 4.0 of the
Creative Commons Licenses. Dissemination of GAIN data will promptly follow
publication of related scientific papers. We plan to disseminate data once
papers have been accepted for publication. Data published from GAIN will be
disseminated through Zenodo facilitating re‐use of the data by third parties
for an indefinite period after the project is over. Data published from the
project will undergo a quality assurance process managed by the individuals
responsible for the scientific publication
## Data Security
The security of data collected and generated during GAIN is important to the
success of the project and essential to providing subsequent open access to
selected project data. In this regard, care has been taken to facilitate data
recovery, provide adequate data storage through the project hub, and enable
transfer of sensitive data. Partners, however, are responsible for transferring
the data to the project hub and the Information Management System for real‐time
management of aquaculture data. The data recovery strategy for this cloud
service platform relies on the recovery infrastructure of the cloud
environment where it is hosted: see (O’Donncha F., et al., 2019) for more
details. Where it is necessary to transfer sensitive data, care will be taken
to use encrypted connections. Taking these measures in relevant activities
contributes to data security on the GAIN project. Thus far, the operational
protocols adopted in GAIN have ensured that no data leakage has occurred.
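To make the mention of encrypted connections concrete, the following Python sketch uses SFTP over SSH via the paramiko library as one possible encrypted channel for sensitive files; the host, user, key and paths are hypothetical placeholders, and this is not the project's actual transfer tooling.

```python
# One possible way to move sensitive files over an encrypted channel:
# SFTP over SSH with paramiko. Host, user, key and paths are hypothetical.
import paramiko

def transfer_sensitive_file(host, user, key_path, local_path, remote_path):
    client = paramiko.SSHClient()
    client.load_system_host_keys()            # trust only known hosts
    client.set_missing_host_key_policy(paramiko.RejectPolicy())
    client.connect(host, username=user, key_filename=key_path)
    try:
        sftp = client.open_sftp()
        sftp.put(local_path, remote_path)     # file travels encrypted
        sftp.close()
    finally:
        client.close()
```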
## Ethical aspects
The ethics review for the GAIN project raised the issues of:
* informed consent for the collection, storage, and protection of personal data;
* in case personal data are transferred from/to a non‐EU country or international organisation, confirmation that this complies with national and EU legislation, together with the necessary authorisations.
These issues are treated in (Pastres R., & Licata C., 2018).
## Conclusions
This deliverable reviews the data management plan for the GAIN project
presented in (O’Donncha F. & Pastres R., 2018), in order to identify gaps in
its early version and amend the plan, if required.
The main conclusions are:
1. the data collected during the first reporting period fell into the data typologies identified in (O’Donncha F. & Pastres R., 2018) and the strategy and protocols for archiving them proved effective: therefore, there is no need to change them;
2. data processing and dissemination of data/results will take place mainly during Month 18 to Month 42: based on the limited experience pertaining to the first reporting period, the strategy for making data findable, accessible, interoperable and re‐usable (FAIR) seems adequate;
3. thus far, no leakage of personal or sensitive data has been detected, confirming that the security and data protection measures adopted in GAIN are effective.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
1376_RELACS_773431.md
|
# Executive summary
The aim of designing a RELACS data management plan is to define the project
data management policy, i.e. how data may be shared and exploited, and how data
will be curated and preserved, in order to i) efficiently exploit data in each
WP and between WPs by cross-analysis, ii) use experimental data efficiently
for modelling activities and vice versa, and iii) save data and associated
metadata for future use after the project. The RELACS Collaborative Workspace
Platform will be the common support for the collection of data. Each WP leader
will ensure that data are correctly deposited on the platform and will manage
the access rights needed for its efficient functioning. The person responsible
for the Collaborative Workspace will manage its functioning and organisation.
The coordinator is responsible for the whole process.
# Aim and approach
The aim of designing a RELACS Collaborative Workspace Platform is to bring
together data acquired in the RELACS project, in order to i) efficiently
valorise data in each WP and between WPs by cross-analysis, ii) use
experimental data efficiently for modelling and iii) save data and
associated metadata for future use after the project. Particular attention is
paid to the non-disclosure of data that may compromise justified interests of
the SMEs and of data with exploitation potential. Data not requiring
protection are discoverable, accessible, assessable, interoperable and usable
beyond their original purpose. The data generated or used by the project and
not covered by data protection are managed and shared using the secure
internet-based “RELACS Collaborative Workspace Platform”. This allows data to
be Findable, Accessible, Interoperable and Reusable (FAIR). During the project
lifetime, access to data is restricted to project partners to help ensure full
exploitation.
Curation and preservation: RELACS data generated (except proprietary
background or foreground IP) are archived in the data archive facility (“Data
Resource Centre”) provided by FiBL by the administrative and scientific
coordination (WP9). The archiving facility will continue to be hosted by FiBL
beyond the RELACS project for future analysis by the scientific community and
the organic sector, on FiBL’s own resources. The size of the data to be stored
might range between 10 GB and 1 TB.
**Figure 1** Data management policy of RELACS
Exploitation: The Ex. Com., assisted by the IPACC, identifies project results
with exploitation potential or commercial value, or results that are costly and
difficult to replicate. The Ex. Com. assesses the best options for exploiting
these results (e.g. opting for disclosure and publication of results, or
protection through patent or other forms of IPR), consulting patent attorneys,
IP specialist agents and officers at the partners’ Knowledge Transfer
Departments if required. During the general meetings, a special session on IP
issues will be scheduled to enable open discussions and joint decisions on the
best strategies for managing and exploiting the project results. Hence,
protection of the interests of all the involved parties is ensured. This will
enable the exploitation strategy of all parties to be reviewed regularly and
will make sure any relevant result is on route for exploitation (directly by
the partners or indirectly by third parties) with convenient terms and
conditions for all project partners. Decisions relating to data management
needed between meetings will be approved through correspondence.
Concerning the data generated and not covered by data protection, the project
will by default provide open access in line with the “Open Access to research
data” policy. This will ensure facilitated access, re-use and preservation of
its research data as set in the article 29.3 of the Model Grant Agreement.
Fundamental scientific results will be freely disseminated through appropriate
channels: scientific publications, presentations at international conferences
and workshops, etc. Data (not susceptible to be exploited further by project
partners and anonymised if needed) will be disclosed 12-24 months after the
end of the project. New data will become accessible at regular intervals
provided that they have been fully exploited by project partners to make
possible for third parties to access, mine, exploit, reproduce and
disseminate, free of charge for any user in compliance for open access rules.
After the project end, beneficiaries will deposit the digital research data
generated in their institutional, OpenAIRE-compliant research data
repositories. The procedure to follow will be regulated by the Consortium
Agreement and advertised on the project website.
# Organisation and functioning
FiBL has the responsibility of designing the RELACS Collaborative Workspace
and managing its functioning (described in detail in Deliverable D9.3). This
workspace will be partly used to save experimental data acquired during the
RELACS project, and also to capitalise on “acquired data” of partners who
agree to share them for the benefit of the project (specified as background
in the “backgrounds” annex of the Consortium Agreement).
The coordinator of RELACS, as WP9 leader, supervises the full process and
helps each WP leader to ensure the efficient use of the RELACS Collaborative
Workspace Platform and to resolve any difficulties in obtaining data from
partners.
The WP leaders organise the experimental work in the WPs by ensuring that
common protocols are implemented (common or specific protocols as decided by
the partners under the responsibility of the WP leader). The Research Teams,
located in several countries and institutes, carry out the experimental work
in the WPs. The RELACS Collaborative Workspace Platform allows the various and
heterogeneous experimental data collected to be brought together in one place
and one format. The WP leaders organise the design of Template Files for
collecting data. The Research Teams collaborate with the WP leader by
providing data and by depositing the labelled files, cross-referenced in the
directory of the RELACS Database. The RELACS Collaborative Workspace Platform
also comprises a common description and labelling scheme to allow for clear
identification of experiment ID, owner of data, study directors, experimental
designs, codes and data.
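As an illustration only, the following Python sketch shows what such a labelling record might look like as a companion sheet to a Template File; all field names are hypothetical, since the actual scheme is defined by the WP leaders' template design.

```python
# Illustrative labelling record for a RELACS Template file; the field names
# are hypothetical and would be fixed by the WP leaders' template design.
import csv

LABEL_FIELDS = [
    "experiment_id",   # unique experiment identifier
    "wp",              # work package (WP1-WP6)
    "data_owner",      # partner institution owning the data
    "study_director",  # person responsible for the trial
    "design",          # experimental design (e.g. randomised block)
    "protocol_code",   # reference to the protocol in the Handbook
]

def write_label_sheet(path, rows):
    """Write the identification sheet that accompanies the data sheet."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LABEL_FIELDS)
        writer.writeheader()
        writer.writerows(rows)
```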
## Data types
RELACS data will include: (i) Survey data (WP1-6) to assess statistics and
expert opinion on current uses of contentious inputs, (ii) data of meta-
analyses (WP 3 and 6) (iii) experimental data from field experiments including
inventories of exchanged materials (WP1-6), (iv) confidential data related to
production of alternative inputs and associated business plans (WP 1, 2, 4 and
6), (v) data used to conduct socio-economic assessments (WP1-6, WP7) and (vi)
public data such as policy briefs, recommendations, working and final papers
on roadmaps (WP7 and 8).
Standards and Control: RELACS will define standard protocols on socio-economic
assessment for all WPs, while established protocols for trials (e.g. EPPO
guidelines) will be used throughout the project. WP9 will produce and update a
"Handbook of protocols and methodology" as part of the Data Management Plan to
be used with all WPs. The aim will be to define standardized ways to collect,
analyse, store and format the data produced by RELACS partners and third
linked parties, in order to facilitate data sharing. This will be achieved in
four steps:
1. agreement on data to be collected and for which purpose;
2. agreement on the methodology of data collection or analysis; the selected methods/protocols will be stated in the 'living' Handbook, which will be continuously updated during the course of the project;
3. agreement on the format of the data files, for the purpose of data harmonization;
4. agreement on the units of measurement used to report on the data, in order to avoid a posteriori data conversion and associated potential mistakes (see the sketch after this list).
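To illustrate step 4, the sketch below converts incoming values to an agreed reporting unit at data entry, so that no a posteriori conversion is needed; the variables, units and conversion factors shown are illustrative assumptions, not the agreed RELACS list.

```python
# Minimal sketch of step 4: convert incoming values to the agreed reporting
# unit at data entry. Units and factors are illustrative, not the agreed
# RELACS list.
TO_AGREED_UNIT = {
    ("yield", "kg/ha"): 0.001,   # agreed unit assumed to be t/ha
    ("yield", "t/ha"): 1.0,
    ("n_input", "g/m2"): 10.0,   # agreed unit assumed to be kg/ha
    ("n_input", "kg/ha"): 1.0,
}

def to_agreed_unit(variable, unit, value):
    """Return the value expressed in the agreed reporting unit."""
    try:
        return value * TO_AGREED_UNIT[(variable, unit)]
    except KeyError:
        raise ValueError(f"No agreed conversion for {variable} in {unit}")
```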
Modelling & Statistical analysis: Principles and algorithms of simulation
models and statistical analysis will be described in scientific publications
according to rigorous scientific standards for transparency, reproducibility
and conciseness vs. completeness.
## Contents of the RELACS Collaborative Workspace Platform
The RELACS experimental database is composed of various files (templates
completed by each Research team) including data and metadata required. The
RELACS experimental database is a specific directory hosted on the RELACS
Collaborative Platform. Each partner has access to the RELACS Collaborative
Platform. This main directory contains the sub-directories corresponding to
all WPs and tasks; The WP leaders are responsible to deliver the rights on the
directory corresponding to their WP, for reading and writing (one of them or
both).
Each WP leader of WP1, WP2, WP3, WP4, WP5 and WP6 has the responsibility to
organise the design of adapted RELACS Templates according to the type of
experiment done in each WP. The Templates are Excel files including data and
clear identifiers.
The Research Teams have the responsibility to populate the RELACS database
with experimental data collected during the RELACS project. The WP leader has
the responsibility to verify that the experimental data are appropriate and
deposited correctly and in due time by each partner in the RELACS experimental
database directory.
## Update of the process
The process presented here will necessarily be evolutionary. It will be
adapted and specified as the RELACS project evolves, to take into account
unexpected situations, the way in which the partners take ownership of the
RELACS Collaborative Platform, and the difficulties encountered during the use
of the process. The goal is that users are not put off by the use of the
Collaborative Platform, that the RELACS Collaborative Platform is regularly
completed, and that all results are accessible to all partners with access
rights.
# General rules and confidentiality
## Data secrecy, backup and data archiving
Data management and use is regulated in detail through the Consortium
Agreement (CA). The CA regulates the process of obtaining IP protection,
exploitation and revenue sharing between partners.
Specific secrecy agreements and/or material transfer agreements will be signed
among partners involved in tasks with sensitive IP and commercial issues, if
required. Confidentiality for external guests will be managed through secrecy
agreements. The WPs have been designed to optimize the use of data and avoid
conflicts of interest between partners.
Curation and preservation: Data will be deposited in existing and maintained
EU and international archives to ensure their accessibility during and long
after the project has ended. The RELACS Collaborative Platform is hosted by
FiBL on its servers in Switzerland, ensuring a long life of the data
collection and an access to future analysis by the scientific community.
## Organisation of data flows
Parties contributing to connected experiments/surveys will follow a common
protocol and provide data in a joint format. These files of raw data will
comprise all data necessary for clear identification of experiment ID,
experimental design, ownership and any information needed to (meta-)analyse
data ex post. The data files are collated in the WP specific collaborative
workspace under the supervision of each WP leader. The WP Research Teams will
use previous year’s data to adjust the protocols for next year’s experiments
as needed. The timely execution of this annual cycle is important for the WP’s
functioning.
Researchers of the WP and maybe of other WPs will request data from the
database and refine these through statistical analyses to generate results
presented in publications. This will typically be an activity near the end of
the project, and it is the responsibility of the WP researchers that the
collected data will be appropriate for statistical analysis and for scientific
publication.
Each WP leader is also in charge of the data flows inside the WP and is
responsible for delivering data to other WPs upon request. Those responsible
for the WP experiments will use previous year’s data to adjust the protocols
for the year’s experiments as needed. The timely execution of this annual
cycle is important for the WPs functioning.
# Conclusion
Curation / preservation and exploitation of data are crucial parts of the
RELACS activities and are addressed by the data management plan. The RELACS
Collaborative Platform is a core tool for these activities. The successful
establishment of the RELACS Collaborative Platform and of procedures for data
sharing in RELACS will be a dynamic process, implemented and adapted on an
ongoing basis.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
1377_SUPREMA_773499.md
|
# Executive summary
This deliverable presents the Data Management Plan (DMP) on open access data
handling by SUPREMA. Here, Open access (OA) refers to the practice of
providing on-line access to scientific information that is free of charge to
the end-user and reusable. ‘Scientific’ refers to all academic disciplines. In
the context of research & innovation, ‘scientific information’ can mean: (i)
peer-reviewed scientific research articles (published in scholarly journal) or
(ii) research data (data underlying publications, curated and raw data) (see
also: European Commission, 2017).
For SUPREMA, the DMP is defined as ‘the development, execution and supervision
of plans, policies, programmes and practices that control, protect, deliver
and enhance the value of data and information assets’ obtained.
This report describes the management of the research data collected and
generated during the project, and after it is completed. This also includes
data to be generated, methodologies and standards, data privacy/openness, and
preservation measures.
Changes with respect to the DoA
No changes with respect to the DoA
Dissemination and uptake
The deliverable is publicly available. SUPREMA does not necessarily open up
all research data; the document explains which of the research data generated
and/or collected will be made open.
Short Summary of results
DataM is the European Commission’s data portal of agro-economic modelling. It
contains the outcomes of research activities, and is operated by the Joint
Research Centre (JRC) of the European Commission. DataM, including the web
portal but also the Information System, will be used to release model runs
that are considered for open access release. A baseline comparison and
harmonization action will be addressed for all models represented in SUPREMA.
A medium-term (until 2030) assessment of European agricultural policy
alternatives will cover CAPRI, IFM-CAP and AGMEMOD-MITERRA Europe. Finally,
SUPREMA will also use different modelling tools for the long-term (until 2050)
assessment of climate change goals, using GLOBIOM and MAGNET as leading
models. Compared to the first version of the data management plan (submitted
in Month 6, June 2018), update of the data management plan is updated,
according to the current scenario in Month 12 (December 2018). A new section 4
is added.
Evidence of accomplishment
The deliverable itself can act as the evidence of accomplishment.
## Glossary / Acronyms
<table>
<tr>
<th>
AGMEMOD
</th>
<th>
AGRICULTURE MEMBERSTATES MODELLING
</th> </tr>
<tr>
<td>
AGMIP
</td>
<td>
AGRICULTURAL MODEL INTERCOMPARISON AND IMPROVEMENT PROJECT
</td> </tr>
<tr>
<td>
BI
</td>
<td>
BUSINESS INTELLIGENCE
</td> </tr>
<tr>
<td>
CA
</td>
<td>
CONSORTIUM AGREEMENT
</td> </tr>
<tr>
<td>
CAPRI
</td>
<td>
COMMON AGRICULTURAL POLICY REGIONALISED IMPACT MODELLING SYSTEM
</td> </tr>
<tr>
<td>
CSV
</td>
<td>
COMMA SEPARATED VALUES
</td> </tr>
<tr>
<td>
DBA
</td>
<td>
DATABASE ADMINISTRATOR
</td> </tr>
<tr>
<td>
DG
</td>
<td>
DIRECTORATE GENERAL
</td> </tr>
<tr>
<td>
DG AGRI
</td>
<td>
DIRECTORATE GENERAL FOR AGRICULTURE AND RURAL DEVELOPMENT
</td> </tr>
<tr>
<td>
DG COMM
</td>
<td>
DIRECTORATE GENERAL FOR COMMUNICATION
</td> </tr>
<tr>
<td>
DMP
</td>
<td>
DATA MANAGEMENT PLAN
</td> </tr>
<tr>
<td>
EUROCARE
</td>
<td>
EUROPEAN CENTRE FOR AGRICULTURAL, ENVIRONMENTAL AND REGIONAL RESEARCH
</td> </tr>
<tr>
<td>
GA
</td>
<td>
GRANT AGREEMENT
</td> </tr>
<tr>
<td>
GDP
</td>
<td>
GROSS DOMESTIC PRODUCT
</td> </tr>
<tr>
<td>
GLOBIOM
</td>
<td>
GLOBAL BIOSPHERE MANAGEMENT MODEL
</td> </tr>
<tr>
<td>
IFM-CAP
</td>
<td>
INDIVIDUAL FARM MODEL FOR COMMON AGRICULTURAL POLICY
</td> </tr>
<tr>
<td>
IIASA
</td>
<td>
INTERNATIONAL INSTITUTE FOR APPLIED SYSTEMS ANALYSIS
</td> </tr>
<tr>
<td>
IPR
</td>
<td>
INTELLECTUAL PROPERTY RIGHTS
</td> </tr>
<tr>
<td>
JRC
</td>
<td>
JOINT RESEARCH CENTRE
</td> </tr>
<tr>
<td>
LCA
</td>
<td>
LIFE CYCLE ANALYSIS
</td> </tr>
<tr>
<td>
LULUCF
</td>
<td>
LAND USE, LAND-USE CHANGE, AND FORESTRY
</td> </tr>
<tr>
<td>
MAGNET
</td>
<td>
MODULAR APPLIED GENERAL EQUILIBRIUM TOOL
</td> </tr>
<tr>
<td>
NUTS
</td>
<td>
NOMENCLATURE OF TERRITORIAL UNITS FOR STATISTICS
</td> </tr>
<tr>
<td>
OA
</td>
<td>
OPEN ACCESS
</td> </tr>
<tr>
<td>
SUPREMA
</td>
<td>
SUPPORT FOR POLICY RELEVANT MODELLING OF AGRICULTURE
</td> </tr>
<tr>
<td>
UAA
</td>
<td>
UTILIZED AGRICULTURAL AREA
</td> </tr>
<tr>
<td>
WR
</td>
<td>
WAGENINGEN RESEARCH
</td> </tr> </table>
# 1 Introduction
## 1.1 Structure of the document
Section 1.2 will outline the need for a Data Management Plan in SUPREMA, and
proposes the use of DataM for the release of public datasets that are traceable
through the publication of related meta-data in the JRC Data catalogue and in
major European open data portals. Section 2 will present the DataM Information
System, including the data management tool, and the software used. Moreover,
the DataM portal is presented, its governance and architecture, as well as
data privacy considerations. Section 3 will summarise the steps for open
release of scenarios from CAPRI, GLOBIOM, MAGNET, AGMEMOD, MITERRA-EUROPE and
IFM-CAP, including IPR. Main conclusions are presented in Section 4 of the
report.
## 1.2 Why is a Data Management Plan needed?
SUPREMA participates in the Open Access and Open Research Data Pilot of
Horizon 2020. From 2017, all H2020 projects participate in a pilot to make the
underlying data related to project outputs openly available for use by other
researchers, innovative industries and citizens (
_https://www.openaire.eu/what-is-the-open-research-data-pilot_ ). According
to the Open Research Data Pilot, ‘Open data is data that is free to access,
reuse, repurpose, and redistribute. The Open Research Data Pilot aims to make
the research data generated by Horizon 2020 projects accessible with as few
restrictions as possible, while at the same time protecting sensitive data
from inappropriate access.’ (
_https://www.openaire.eu/what-is-the-open-research-data-pilot_ ). Data will
be released in open formats, with proper documentation to support their use in
other research. After project completion, and provided there is no objection
by any of the project partners and anonymisation is preserved, the data are
foreseen to be published in an Open Data portal (for example
_http://opendata.europa.eu_ ) for future research.
The DMP specifies the implementation of the pilot for: data generated and
collected, standards in use, workflow to make data accessible for use, reuse
and verification by the community, and definition of a strategy of curation
and preservation of the data. Therefore, we refer to the SUPREMA Grant
Agreement (GA), Article 29.3 on “Open Access to research data”:
Regarding the digital research data generated in the action (‘data’), the
beneficiaries must:
1. deposit in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge for any user — the following:
* the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible;
* other data, including associated metadata, as specified and within the deadlines laid down in the 'data management plan'.
2. provide information — via the repository — about tools and instruments at the disposal of the beneficiaries and necessary for validating the results (and — where possible — provide the tools and instruments themselves).
This does not change the obligation to protect results in Article 27, the
confidentiality obligations in Article 36, the security obligations in Article
37 or the obligations to protect personal data in Article 39, all of which
still apply.
As an exception, the beneficiaries do not have to ensure open access to
specific parts of their research data if the achievement of the action's main
objective, as described in Annex 1, would be jeopardised by making those
specific parts of the research data openly accessible. In this case, the data
management plan must contain the reasons for not giving access.
The data management policy described in this document reflects the current
state of consortium agreement on data management. Data will be stored in a
database developed by JRC (DataM). Project participants will have secured web
access to the databases, which will have been automatically checked for
consistency, homogeneity and completeness. After project completion, and in
case of no objection by project partners, DataM dataset can be open to the
public for future research always consistent with exploitation and
Intellectual Property Rights (IPR) requirements. DataM public datasets will be
made traceable through the publication of related meta-data in the JRC Data
catalogue ( _http://data.jrc.ec.europa.eu_ ) and in major European open data
portals (EU open data portal, _http://data.europa.eu_ and European Data
Portal, _https://www.europeandataportal.eu_ ) .
Figure 1 - DataM and open data
# 2 DataM
DataM is publicly known as a web site ( _https://datam.jrc.ec.europa.eu_ ) :
it is the European Commission's data portal of agro-economic modelling. DataM
contains model's data and estimates about the economics of agriculture and of
the sustainable resources. By definition, DataM does not deal with official
statistical data. DataM contents are the outcomes of research activities.
Indeed, DataM is operated by JRC, the Joint Research Centre of the European
Commission. Data is presented both in terms of raw CSV datasets, ready to
download, and in the form of advanced interactive dashboards or interactive
infographics that allow the self-analysis of data.
## 2.1 The DataM Information System
Internally in JRC, and in the context of SUPREMA, the term DataM does not
refer only to the web portal but to the Information System in broader terms.
The DataM Information System includes also a "data management tool" and a
"Business Intelligence tool".
Figure 2 - DataM Information System
### 2.1.1 Life-cycle of scientific data and DataM
We can consider the life cycle of scientific data as composed by three main
phases: construction, analysis and dissemination. The DataM Information System
deals principally with the last two.
Figure 3 - Life-cycle of data
The construction of the data itself varies and depends on the individual
scientific activities. Typical DataM outcomes are the result of modelling
activities (i.e. GAMS software processing). However, DataM also contains
results of data processing, following a scientific methodology, over data
coming from other sources. The following table (Table 1) lists the origin of
DataM contents at the time of writing.
<table>
<tr>
<th>
#
</th>
<th>
DataM Content
</th>
<th>
Raw datasets
</th>
<th>
Interactive Dashboard
</th>
<th>
Interactive infographics
</th>
<th>
Origin
</th> </tr>
<tr>
<td>
1
</td>
<td>
AGMIP - Agricultural model intercomparison and improvement project - Phase 1
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
Modelling (CAPRI, IMAGE, GLOBIOM, MAGNET, MAgPIE)
</td> </tr>
<tr>
<td>
2
</td>
<td>
AGMIP – Food insecurity and global climate change mitigation policy
</td>
<td>
Yes (coming soon)
</td>
<td>
No
</td>
<td>
No
</td>
<td>
Modelling (AIM, CAPRI, EPPA, ENVISAGE, FARM,
GLOBIOM, GCAM, GTEM, IMPACT, MAGPIE, MAGNET)
</td> </tr>
<tr>
<td>
3
</td>
<td>
ASGTS_KENYA
</td>
<td>
No
</td>
<td>
No
</td>
<td>
Yes (coming soon)
</td>
<td>
Modelling (CGE)
</td> </tr>
<tr>
<td>
4
</td>
<td>
DG AGRI-JRC - Production, trade and apparent use
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
Application of DG AGRI experts' coefficients and DG
AGRI/JRC methodology to combine COMEXT data with DG AGRI data (Short Term
Outlook)
</td> </tr>
<tr>
<td>
5
</td>
<td>
FOODSECURE
</td>
<td>
No
</td>
<td>
No
</td>
<td>
Yes
</td>
<td>
Modelling (IMAGE, GLOBIOM, MAGNET)
</td> </tr>
<tr>
<td>
6
</td>
<td>
FTA
</td>
<td>
No
</td>
<td>
No
</td>
<td>
Yes
</td>
<td>
Modelling (AGLINK-COSIMO, MAGNET)
</td> </tr>
<tr>
<td>
7
</td>
<td>
JRC - AgCLIM50 - Phase 1
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
Modelling (CAPRI, IMAGE, GLOBIOM, MAGNET, MAgPIE)
</td> </tr>
<tr>
<td>
8
</td>
<td>
JRC - Bioeconomics
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
Yes
</td>
<td>
Application of calculations over publicly available data (mainly EUROSTAT
COMEXT) as from JRC methodology based on coefficients provided by NOVA
institute
</td> </tr>
<tr>
<td>
9
</td>
<td>
JRC - Biomass estimates
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
No
</td>
<td>
Application of calculations over publicly available data
(mainly FAO Prodstat) as from JRC/NOVA methodology
</td> </tr>
<tr>
<td>
10
</td>
<td>
JRC - Biomass uses and flows
</td>
<td>
Yes (coming soon)
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
Complex integration of multi-source data following a JRC methodology
</td> </tr>
<tr>
<td>
11
</td>
<td>
JRC - BioSAMs for the EU Member States - 2010
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
Elaboration of the SAM by CGE team at JRC-Economics of agriculture department
</td> </tr>
<tr>
<td>
12
</td>
<td>
JRC - Matrice de comptabilité sociale - Kenya - 2014
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
Elaboration of the SAM by CGE team at JRC-Economics of agriculture department
</td> </tr>
<tr>
<td>
13
</td>
<td>
JRC - Matrice de comptabilité sociale - Sénégal - 2014
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
Elaboration of the SAM by CGE team at JRC-Economics of agriculture department
</td> </tr>
<tr>
<td>
14
</td>
<td>
JRC - Social accounting matrix - Kenya - 2014
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
Elaboration of the SAM by CGE team at JRC-Economics of agriculture department
</td> </tr>
<tr>
<td>
15
</td>
<td>
JRC - Social accounting matrix - Senegal - 2014
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
Elaboration of the SAM by CGE team at JRC-Economics of agriculture department
</td> </tr>
<tr>
<td>
16
</td>
<td>
SCENAR2030
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
Modelling (CAPRI, IFM-CAP, MAGNET)
</td> </tr>
<tr>
<td>
17
</td>
<td>
DEVELOPING COUNTRIES' FICHES
</td>
<td>
No
</td>
<td>
No
</td>
<td>
Yes (coming soon)
</td>
<td>
Integration of data of various sources
</td> </tr> </table>
Table 1 - DataM contents (June 2018)
For those contents whose source is not modelling, data construction is based
on ad-hoc techniques. In these cases, the typical technologies in use are: (i)
the Python language (data extraction from file or web sources, and data
crunching); and (ii) database tools such as Oracle and SQLite (data
crunching in SQL or PL/SQL).
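As a minimal illustration of this kind of Python-based construction step, the sketch below reads a CSV from a web source and reshapes it into a time-series-oriented long format; the column names are assumptions made for the example.

```python
# Sketch of a Python-based construction step: pull a CSV from a web source
# and reshape it into the time-series-oriented long format used downstream.
# Source URL and column names are illustrative assumptions.
import pandas as pd

def build_long_table(csv_url):
    wide = pd.read_csv(csv_url)          # e.g. one column per year
    long = wide.melt(
        id_vars=["region", "indicator", "unit"],
        var_name="year",
        value_name="value",
    )
    long["year"] = long["year"].astype(int)
    return long.dropna(subset=["value"])
```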
DataM Information System is used for the final integration of data into
consolidated datasets, and for the post-construction part. DataM can have also
an impact in the construction phase since DataM datasets can work as input for
further elaborations.
### 2.1.2 The DataM data management tool
With "data management tool" we mean a software layer above the data base
management system that factorizes common needs for the management of
(scientific) datasets.
The DataM data management tool allows:
* loading data from external sources
* storing data in a standard format (time-series oriented star-diagram with one unique measure and arbitrary dimensions; time and "indicator" are mandatory dimensions)
* avoiding "data manipulation" operations (creating, modifying, dropping tables and indexes), with no need of DBA support
* managing dictionaries of common reference data, allowing harmonization of data by:
  * mapping different nomenclatures
  * converting units of measurement
  * aligning different granularities
  * aligning different taxonomies
* managing data versioning
* managing standard meta-data for all contents:
  * Description
  * Contact point
  * Geographical coverage 1
  * Time coverage
  * Copyright
  * Update frequency
  * Domain
  * Keyword (tags for search operations in open data portals)
  * Contributors: name, surname, email, ORCID
  * Distributions: link for bulk download of raw data; link for interactive data download; link to interactive dashboard or infographic
  * Related publications / methodology documents: title, authors, DOI
Figure 4 - Data management tool
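To make the metadata list above concrete, here is a hedged sketch of what one such standard metadata record could look like as a Python structure; the field names and values are illustrative, and the actual schema is internal to the DataM tool.

```python
# Illustrative sketch of a standard DataM metadata record, mirroring the
# fields listed above. All names and values are placeholders.
dataset_metadata = {
    "description": "Model outcomes of scenario X",
    "contact_point": "functional.mailbox@example.eu",
    "geographical_coverage": "EU-28",
    "time_coverage": "2010-2050",
    "copyright": "European Union",
    "update_frequency": "on new model runs",
    "domain": "agro-economic modelling",
    "keywords": ["agriculture", "CAP", "scenario"],
    "contributors": [
        {"name": "Jane", "surname": "Doe",
         "email": "jane.doe@example.eu", "orcid": "0000-0000-0000-0000"},
    ],
    "distributions": {
        "bulk_download": "https://datam.jrc.ec.europa.eu/...",
        "interactive_download": "https://datam.jrc.ec.europa.eu/...",
        "dashboard": "https://datam.jrc.ec.europa.eu/...",
    },
    "related_publications": [
        {"title": "Methodology report", "authors": ["J. Doe"], "doi": "..."},
    ],
}
```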
The data management tool of DataM is inspired by the time-series-oriented
reference data dictionaries and star-diagram data structures managed by the
formerly used "PROGNOZ platform" software (principles that are common to most
data warehouse technologies), while integrating the advanced management of
standard meta-data.
The current data management tool of DataM does not include features for:
* Extracting data from arbitrary sources and transforming it in a structure suitable for the load in the database. Python routines are developed at this purpose.
* Workflow management for the collaborative construction of datasets by distributed working groups.
* Management of big data
* Management of geo-spatial data
* Management of arbitrary data structures that do not fit into a star-diagram
* Management of multilingualism
### 2.1.3 The DataM BI tool: Qlik-Sense
Qlik-Sense is a modern software package for business intelligence (BI). It is
considered one of the top leaders on the BI market. 2 In use in many
Directorates General, it is a de facto standard in the European Commission.
Business intelligence (BI) comprises the strategies and technologies used by
enterprises for the data analysis of business information. BI technologies
provide historical, current and predictive views of business operations.
Common functions of business intelligence technologies include reporting,
online analytical processing, analytics, data mining, process mining, complex
event processing, business performance management, benchmarking, text mining,
predictive analytics and prescriptive analytics.
JRC is adopting BI for scientific purposes; in particular, in DataM, BI is
used for:
* analysing the constructed datasets:
  * comparison with other sources
  * outlier detection
  * checking the respect of "business rules"
* exchanging results in a proper way within the scientific circuit or with other stakeholders (i.e. policy DGs)
* disseminating results on the web, allowing users to make self-analyses of data:
  * interactive dashboards = dashboards with interrelated charts/tables/maps with scarce or absent narrative
  * interactive infographics = web pages with much narrative and embedded Qlik contents.
### 2.1.4 The DataM portal
The DataM portal ( _https://datam.jrc.ec.europa.eu_ ) is an official part of
the European Commission web presence 3 ; it has recently been approved by DG
COMM and refactored following EC web standards, and it is inspired by the
principles of the knowledge-for-policy platform 4 being developed by JRC. It
offers the following functionalities:
* Search of contents by specifying keywords
* Download of raw data:
  * bulk download: a zip file with the CSV data, citation text, legal notice and all metadata
  * interactive data download, by filtering the parts of interest of the dataset
  * automatic generation of the citation text, for correctly citing the dataset in scientific publications
* Visualizing the meta-data and the copyright notice
* Accessing the related publications / methodology documents
* Accessing the Qlik interactive dashboards and infographics
* API for automatic synchronization of the meta-data with the JRC data catalogue (and subsequent dissemination to the EU open data portal and the European Data Portal)
* API for transmitting data on-demand to other computer systems
### 2.1.5 DataM governance
At the time of writing, DataM is developed and managed by a team of four IT
professionals / data scientists working under the JRC department for Economics
of Agriculture within the directorate for Sustainable Resources. It is powered
by JRC IT, in particular by the staff (and on machines) of the JRC Seville
site.
### 2.1.6 Architecture of the database
The database of DataM underlying the data management tool is implemented in
Oracle. Each dataset is basically implemented in two tables: one table where
each record identifies a time-series, and one table where each record is a
data-point (time-series identifier, time key, value). Reference data is
organized in specific tables.
The strength of this model is its plasticity: it can host almost all typical
datasets of our domain with a simple, common structure.
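The following sketch illustrates this two-table layout, using SQLite purely for demonstration (DataM itself runs on Oracle); the dimension columns beyond the mandatory ones described above are assumptions for the example.

```python
# Minimal sketch of the two-table layout described above, using SQLite for
# illustration (DataM runs on Oracle): one table identifies each time-series,
# one table holds its data points.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE series (
    series_id INTEGER PRIMARY KEY,
    indicator TEXT NOT NULL,      -- mandatory dimension
    region    TEXT,               -- further, arbitrary dimensions (assumed)
    unit      TEXT
);
CREATE TABLE datapoint (
    series_id INTEGER REFERENCES series(series_id),
    time_key  INTEGER NOT NULL,   -- mandatory time dimension
    value     REAL,
    PRIMARY KEY (series_id, time_key)
);
""")
conn.execute("INSERT INTO series VALUES (1, 'PROD', 'EU28', '1000 t')")
conn.execute("INSERT INTO datapoint VALUES (1, 2030, 12345.6)")
```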
### 2.1.7 2018 plan for DataM
Activities on DataM in 2018 will focus on:
* Implementation of a Qlik-based system for the data quality check of model outcomes. At time of writing, works are in progress for the check of CAPRI outcomes
* Implementation of a data connector for the direct integration of GAMS and Qlik (GAMS2QLIK project, in progress)
* Refactoring of the data management tool (not in progress, the project is under inception and could encompass 2018 and 2019)
### 2.1.8 Data privacy
DataM is mainly intended for public dissemination of open data, which means
that most functionalities do not require a login. However, by logging in,
authorised users can access restricted contents that require specific access
rights. Restrictions normally apply to contents still under study, and this
can be the case for the SUPREMA project: contents under preparation can be
shared within the SUPREMA community by accessing the restricted area of DataM.
The username and password are obtained through the European Commission's user
authentication service (EU Login).
For other users, an EU Login account can be obtained through a simple
registration procedure:
1. Click on "Log In" in the top-right corner of the screen.
2. Click on "Create an account" and follow the instructions.
# 3 Open access release of scenarios
## 3.1 List of variables by model
A list of variables is defined for each of the models that are considered for
open access release. A baseline comparison and harmonization action will be
addressed for all models represented in SUPREMA (Task 3.1 – Inter-model
baseline comparison and harmonization). In addition, a medium-term (until 2030)
assessment of European agricultural policy alternatives will cover CAPRI, IFM-
CAP, and AGMEMOD-MITERRA Europe (Task 3.2 – Using SUPREMA for a medium-term
assessment of European agricultural policy alternatives). Finally, SUPREMA will
also use the different modelling tools for the long-term (until 2050)
assessment of climate change goals (Task 3.3), using GLOBIOM and MAGNET as
leading models. All scenarios will be available in Month 28 of the project and
will be released open access soon afterwards.
### 3.1.1 CAPRI (Common Agricultural Policy Regionalised Impact Modelling
System)
The CAPRI database and scenario output covers a multitude of very diverse
variables that may be grouped in various ways. Table 1 indicates three major
groups:
1. Key physical technological information covers production related, input related, market balance related variables in shades of brown
2. Economic variables are various prices, price elements and values (price times quantities) in shades of blue
3. Derived indicators for the environment and food security in shades of green.
Some of these will be available from other modelling systems as well and are
therefore of interest for the wider research community and for presentation in
Data-M. Other parts of the CAPRI database and model output are only useful for
CAPRI experts, as they require a thorough understanding of CAPRI accounting
rules and definitions. While the CAPRI database and SUPREMA model outputs
will be fully available for download upon request from the CAPRI versioning
system, no obligation to provide technical explanations and advice on the
detailed information can be accepted; interested individuals are instead
invited to attend the CAPRI training sessions, where such technical
information is provided to the attendants. In the process of filling the
database for presentation via Data-M, it will be decided which variables are
presented there and how they should be displayed in a transparent manner.
Table 1. Main Elements of the CAPRI database and scenario output
<table>
<tr>
<th>
</th>
<th>
**Activities (only EU)**
</th>
<th>
**LULUCF**
**(only**
**EU)**
</th>
<th>
**Nutrient balances (only EU)**
</th>
<th>
**Farm- and market balances**
</th>
<th>
**Area use and yields**
</th>
<th>
**GHG component**
</th>
<th>
**Prices and**
**tariffs**
</th>
<th>
**Value information**
</th> </tr>
<tr>
<td>
**Outputs**
</td>
<td>
Output coefficients
</td>
<td>
</td>
<td>
</td>
<td>
Production, seed and feed use, other internal use, losses, stock changes,
total and bilateral exports and imports, human consumption, processing,
TRQs
</td>
<td>
Crop areas and yields by product
</td>
<td>
GHG
effects per product
</td>
<td>
Unit value prices from the EAA with and without subsidies
and taxes,
tariffs
applied and bindings
</td>
<td>
Value of outputs with or without subsidies and taxes linked to
production, consumer expenditure, contributions of products to welfare
indicators,
PSE components, premiums per product
</td> </tr>
<tr>
<td>
**Inputs and mitigation technologies**
</td>
<td>
Input coefficients and
implementation shares
</td>
<td>
</td>
<td>
Balance components by NPK
</td>
<td>
Purchases, internal deliveries
</td>
<td>
</td>
<td>
</td>
<td>
Unit value prices from the EAA with and without subsidies and taxes
</td>
<td>
Value of inputs with or without subsidies
and taxes
link to input use
</td> </tr>
<tr>
<td>
**GHG components**
</td>
<td>
GHG effects per activity
</td>
<td>
GHG
effects per area type
</td>
<td>
</td>
<td>
Total GHG effects from activities
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Human**
**nutrients**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Nutrient consumption per capita
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Income indicators**
</td>
<td>
Revenues, costs, Gross Value Added, premiums
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Total revenues, costs, gross value added, subsidies, taxes,
premium
ceilings and ceiling use, PSE and welfare components
</td> </tr>
<tr>
<td>
**Activity levels and totals**
</td>
<td>
Hectares, slaughtered heads or herd sizes
</td>
<td>
Hectares
</td>
<td>
</td>
<td>
</td>
<td>
Total agricultural area
</td>
<td>
Total GHG effects from products
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Secondary products**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Marketable production, losses, stock changes, exports and
imports, human consumption, processing
</td>
<td>
</td>
<td>
</td>
<td>
Consumer
prices, market prices, import prices
</td>
<td>
consumer expenditure, contributions of products to welfare indicators,
</td> </tr> </table>
### 3.1.2 GLOBIOM (Global Biosphere Management Model)
GLOBIOM could provide the standard AgMIP (Agricultural Model Intercomparison
and Improvement Project) reporting used also by some other models (e.g. CAPRI,
MAGNET), which covers a comprehensive set of economic and environmental
indicators. In addition, GLOBIOM can also provide more detailed reporting for
specific topics of interest, i.e. biomass use and climate change mitigation.
An overview of the main elements of the GLOBIOM database and scenario output
is presented in the tables below.
Table 3. Main Elements of the GLOBIOM database and scenario output
<table>
<tr>
<th>
</th>
<th>
**Activities**
</th>
<th>
**Nutrient balances**
</th>
<th>
**Market balances**
</th>
<th>
**Area use and yields**
</th>
<th>
**GHG**
**sector**
</th>
<th>
**Prices and**
**tariffs**
</th>
<th>
**Value information**
</th> </tr>
<tr>
<td>
**Outputs**
</td>
<td>
Outputs for agriculture and forestry related activities
</td>
<td>
Fertilizer use
</td>
<td>
Production, feed use,
other uses, human consumption, processing, exports and imports for agriculture
and forestry
</td>
<td>
Land use and land use change, crop areas, pasture, productivities
</td>
<td>
GHG
effects for
CH4, N2O, and CO2 emissions
from AFOLU
</td>
<td>
Unit value prices
</td>
<td>
Value of outputs
linked to production, contributions of products to welfare indicators,
</td> </tr>
<tr>
<td>
**Inputs**
</td>
<td>
Input coefficients for different activities
</td>
<td>
Nitrogen balance components
</td>
<td>
</td>
<td>
Land cover information, crop areas and yields
</td>
<td>
Emission factors
</td>
<td>
Macroeconomic drivers (population, GDP,
technological change), carbon prices
</td>
<td>
</td> </tr>
<tr>
<td>
**GHG components**
</td>
<td>
GHG effects per activity
</td>
<td>
Synthetic fertilizer use,
manure applied and
dropped to soils
</td>
<td>
GHG
emissions associated to agricultural and forestry production
</td>
<td>
Deforestation and other land use changes, dedicated energy plantations
</td>
<td>
GHG
emissions associated
to crop- and
livestock, forestry and land use changes
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Human**
**nutrients**
</td>
<td>
Consumption per capita by product
</td>
<td>
</td>
<td>
Total human consumption, consumption per capita
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Income indicators**
</td>
<td>
Value of outputs linked to production, contributions of products to welfare
indicators
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
Value of outputs
linked to production, contributions of products to welfare indicators
</td> </tr> </table>
GLOBIOM will propose using the AgMIP reporting template, which has been used
for model comparison in the past. So, it is a format in which all models need
to deliver output data.
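As an informal illustration of such a reporting template, the sketch below assembles a few rows with pandas, using variable and sector codes from the lists in this section (PROD, AREA, CALO; WHT, TOT); the exact column set and the values shown are assumptions made for the example, not SUPREMA results.

```python
# Hedged sketch of rows in an AgMIP-style long-format reporting template
# (one value per model/scenario/region/item/variable/year). Column layout
# and values are illustrative assumptions, not the official AgMIP schema.
import pandas as pd

rows = [
    # model,    scenario,   region, item,  variable, year, unit,         value
    ("GLOBIOM", "baseline", "EU",   "WHT", "PROD",   2030, "1000 t",     150000.0),
    ("GLOBIOM", "baseline", "EU",   "WHT", "AREA",   2030, "1000 ha",     26000.0),
    ("MAGNET",  "baseline", "EU",   "TOT", "CALO",   2030, "kcal/cap/d",   3400.0),
]
report = pd.DataFrame(
    rows,
    columns=["Model", "Scenario", "Region", "Item",
             "Variable", "Year", "Unit", "Value"],
)
```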
<table>
<tr>
<th>
**Indicator**
</th>
<th>
**Variable**
</th>
<th>
**Description**
</th>
<th>
**Unit**
</th> </tr>
<tr>
<td>
_**Economic** _
</td>
<td>
POPT
</td>
<td>
Total population
</td>
<td>
million
</td> </tr>
<tr>
<td>
</td>
<td>
GDPT
</td>
<td>
Total GDP (MER)
</td>
<td>
bn USD
</td> </tr>
<tr>
<td>
</td>
<td>
XPRP
</td>
<td>
Producer price/input price
</td>
<td>
USD/t
</td> </tr>
<tr>
<td>
</td>
<td>
CONS
</td>
<td>
Domestic use
</td>
<td>
1000 t
</td> </tr>
<tr>
<td>
</td>
<td>
FOOD
</td>
<td>
Food use
</td>
<td>
1000 t
</td> </tr>
<tr>
<td>
</td>
<td>
FEED
</td>
<td>
Feed use
</td>
<td>
1000 t
</td> </tr>
<tr>
<td>
</td>
<td>
OTHU
</td>
<td>
Other use
</td>
<td>
1000 t
</td> </tr>
<tr>
<td>
</td>
<td>
NETT
</td>
<td>
Net trade
</td>
<td>
1000 t
</td> </tr>
<tr>
<td>
</td>
<td>
IMPO
</td>
<td>
Imports
</td>
<td>
1000 t
</td> </tr>
<tr>
<td>
</td>
<td>
EXPO
</td>
<td>
Exports
</td>
<td>
1000 t
</td> </tr>
<tr>
<td>
</td>
<td>
CALO
</td>
<td>
p.c. calorie availability
</td>
<td>
kcal/cap/d
</td> </tr>
<tr>
<td>
_**Production** _
</td>
<td>
PROD
</td>
<td>
Production
</td>
<td>
1000 t
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
AREA
</td>
<td>
Area harvested
</td>
<td>
1000 ha
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
YEXO
</td>
<td>
Exogenous crop yield
</td>
<td>
t/ha
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
LYXO
</td>
<td>
Exogenous livestock yield trend
</td>
<td>
kg prt/ha
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
YILD
</td>
<td>
Crop yield
</td>
<td>
t/ha
</td> </tr>
<tr>
<td>
</td>
<td>
LYLD
</td>
<td>
Livestock yield (endogenous)
</td>
<td>
kg prt/ha
</td> </tr>
<tr>
<td>
_**Environment** _
</td>
<td>
LAND
</td>
<td>
Land cover information (cropland, grassland, forestry, other natural
vegetation)
</td>
<td>
1000 ha
</td> </tr>
<tr>
<td>
</td>
<td>
EMIS
</td>
<td>
Total GHG emissions from AFOLU
</td>
<td>
MtCO2e
</td> </tr>
<tr>
<td>
</td>
<td>
ECO2
</td>
<td>
Total CO2 emissions from land use changes
</td>
<td>
MtCO2e
</td> </tr>
<tr>
<td>
</td>
<td>
ECH4
</td>
<td>
Total CH4 emissions from crop- and livestock production
</td>
<td>
MtCO2e
</td> </tr>
<tr>
<td>
</td>
<td>
EN2O
</td>
<td>
Total N2O emissions from crop- and livestock production
</td>
<td>
MtCO2e
</td> </tr>
<tr>
<td>
</td>
<td>
FRTN
</td>
<td>
Fertiliser N
</td>
<td>
1000 t
</td> </tr>
<tr>
<td>
</td>
<td>
WATR
</td>
<td>
Water for irrigation
</td>
<td>
km3
</td> </tr> </table>
### 3.1.3 MAGNET (Modular Applied General Equilibrium Tool)
MAGNET will propose using the AgMIP reporting template, which has been used
for model comparison in the past. So, it is a format in which all models need
to deliver output data. MAGNET projection periods include: 2011-2020;
2020-2030; 2030-2040; and 2040-2050.
**Reporting sectors**
_**Code Description** _
<table>
<tr>
<th>
RIC
</th>
<th>
Rice (paddy equivalent)
</th> </tr>
<tr>
<td>
WHT
</td>
<td>
Wheat
</td> </tr>
<tr>
<td>
CGR
</td>
<td>
Other cereal grains
</td> </tr>
<tr>
<td>
OSD
</td>
<td>
Oilseeds (raw equivalent)
</td> </tr>
<tr>
<td>
SGC
</td>
<td>
Sugar crops (raw equivalent)
</td> </tr>
<tr>
<td>
VFN
</td>
<td>
Vegetables, fruits, nuts (incl. roots and tubers)
</td> </tr>
<tr>
<td>
PFB
</td>
<td>
Plant based fibres
</td> </tr>
<tr>
<td>
ECP
</td>
<td>
Energy crops
</td> </tr>
<tr>
<td>
OCR
</td>
<td>
Other crops
</td> </tr>
<tr>
<td>
RUM
</td>
<td>
Ruminant meats
</td> </tr>
<tr>
<td>
NRM
</td>
<td>
Non ruminant meats
</td> </tr>
<tr>
<td>
DRY
</td>
<td>
Dairy (raw milk equivalent)
</td> </tr>
<tr>
<td>
OAP
</td>
<td>
Other animal products (wool, honey)
</td> </tr>
<tr>
<td>
GRS
</td>
<td>
Grass
</td> </tr>
<tr>
<td>
OFD
</td>
<td>
Other feed products
</td> </tr>
<tr>
<td>
FSH
</td>
<td>
Fish
</td> </tr>
<tr>
<td>
FOR
</td>
<td>
Forestry products
</td> </tr> </table>
**Sectors subcategories (same variables as parents)**
<table>
<tr>
<th>
VFN|VEG
</th>
<th>
Vegetables
</th> </tr>
<tr>
<td>
VFN|FRU
</td>
<td>
Fruits
</td> </tr>
<tr>
<td>
VFN|NUT
</td>
<td>
Nuts
</td> </tr>
<tr>
<td>
NRM|PRK
</td>
<td>
Pork meat
</td> </tr>
<tr>
<td>
NRM|PTM
</td>
<td>
Poultry meat
</td> </tr>
<tr>
<td>
NRM|EGG
</td>
<td>
Poultry eggs
</td> </tr>
<tr>
<td>
NRM|ONR
</td>
<td>
Other non-ruminant
</td> </tr> </table>
**Sectors aggregates**
<table>
<tr>
<th>
CRP
</th>
<th>
All crops
</th> </tr>
<tr>
<td>
LSP
</td>
<td>
Livestock products
</td> </tr>
<tr>
<td>
AGR
</td>
<td>
All agricultural products
</td> </tr>
<tr>
<td>
TOT
</td>
<td>
Total (full economy, population, GDP, calories)
</td> </tr> </table>
<table>
<tr>
<td>
**LAND variable items**
</td>
<td>
</td> </tr>
<tr>
<td>
CRP
</td>
<td>
Cropland _(including energy crops_ )
</td> </tr>
<tr>
<td>
GRS
</td>
<td>
Grassland
</td> </tr>
<tr>
<td>
ONV
</td>
<td>
Other natural land
</td> </tr>
<tr>
<td>
FOR
</td>
<td>
Managed and primary forest
</td> </tr>
<tr>
<td>
NLD
</td>
<td>
Non arable land (desert, built-up areas…)
</td> </tr>
<tr>
<td>
**LAND aggregates/subitems**
</td>
<td>
</td> </tr>
<tr>
<td>
AGR
</td>
<td>
Cropland + grassland
</td> </tr>
<tr>
<td>
ECP
</td>
<td>
Energy crops (included in cropland)
</td> </tr>
<tr>
<td>
**Production factors and intermediates**
</td>
<td>
</td> </tr>
<tr>
<td>
LAB
</td>
<td>
Labor
</td> </tr>
<tr>
<td>
CAP
</td>
<td>
Capital
</td> </tr>
<tr>
<td>
FRT
</td>
<td>
Fertiliser
</td> </tr>
<tr>
<td>
OIL
</td>
<td>
Fossil fuel
</td> </tr>
<tr>
<td>
**GHG emissions sources**
</td>
<td>
</td> </tr>
<tr>
<td>
ENT
</td>
<td>
Enteric Fermentation
</td> </tr>
<tr>
<td>
MMG
</td>
<td>
Manure Management
</td> </tr>
<tr>
<td>
RCC
</td>
<td>
Rice Cultivation
</td> </tr>
<tr>
<td>
SFR
</td>
<td>
Synthetic Fertilizers
</td> </tr>
<tr>
<td>
MAS
</td>
<td>
Manure applied to Soils
</td> </tr>
<tr>
<td>
MGR
</td>
<td>
Manure left on Pasture
</td> </tr>
<tr>
<td>
CRS
</td>
<td>
Crop Residues
</td> </tr>
<tr>
<td>
ORS
</td>
<td>
Cultivation of Organic Soils
</td> </tr>
<tr>
<td>
BSV
</td>
<td>
Burning - Savanna
</td> </tr>
<tr>
<td>
BCR
</td>
<td>
Burning - Crop Residues
</td> </tr>
<tr>
<td>
**GHG mitigation technologies**
</td>
<td>
</td> </tr>
<tr>
<td>
LAD
</td>
<td>
Livestock anaerobic digesters
</td> </tr>
<tr>
<td>
LFS
</td>
<td>
Livestock feed supplements
</td> </tr>
<tr>
<td>
LOT
</td>
<td>
Livestock other
</td> </tr>
<tr>
<td>
CFT
</td>
<td>
Crop improved fertilization
</td> </tr>
<tr>
<td>
CMG
</td>
<td>
Improved cropping management
</td> </tr>
<tr>
<td>
RMG
</td>
<td>
Crop improved rice cultivation
</td> </tr>
<tr>
<td>
COT
</td>
<td>
Crop other
</td> </tr> </table>
### 3.1.4 AGMEMOD (Agriculture Memberstates Modelling)
Economic indicators
* real GDP (index)
* production costs (€/ha; €/kg)
* returns (€/ha)
* prices (€/100 kg)
* yields/crop (ton/ha)
* yields/animal (kg/animal)
* production (1,000 ton)
* food use (1,000 ton)
* feed use (1,000 ton)
* seed use (1,000 ton)
* consumption/capita (kg/head)
* industrial use (1,000 ton)
* exports (1,000 ton)
* imports (1,000 ton)
* area harvested (ha)
* self-sufficiency rate (ratio)
* net trade (1,000 ton)
Social indicators
* population (million inhabitants)
Resources/inputs
* land (ha)
* herd size (animals)
Policy support
* budgetary envelopes/ceilings (1,000 €)
* coupled payments (1,000 €)
* support reaction price (€/kg)
### 3.1.5 MITERRA-Europe
MITERRA-Europe, an environmental impact assessment model for agriculture, will
provide results on a range of environmental indicators, including:
* Greenhouse gas emissions (CO 2 , N 2 O and CH 4 )
* Changes in soil organic carbon
* NH 3 emissions
* Nutrient balances (N and P)
* Nitrogen leaching and runoff
* NO 3 concentration groundwater
* Critical load exceedance
* Soil erosion
* Soil metal balances (Cadmium, Chrome, Copper, Zinc, Lead and Nickel)
* (maybe a land use based biodiversity indicator)
These results will be made available at country and regional (NUTS2) level and
will be expressed per ha UAA, as well as in totals for the region or country.
In addition, the greenhouse gas emissions and nitrogen indicators can also be
provided per agricultural product, based on an LCA (life cycle analysis)
approach.
With respect to the new EU climate policies, the GHG results will also be
presented per sector (Agriculture and LULUCF) at national level, following the
accounting rules.
Regarding Open Access, all resulting output indicators as used for the
baseline and scenario tasks will be provided as Open Access. Almost all of the
input data is derived from public data sources and can therefore be considered
Open Access. The model itself is not open access, as it is a research model
and lacks the proper user interface and manuals to be used by others.
### 3.1.6 IFM-CAP (Individual Farm Model for Common Agricultural Policy)
The following variables are available for IFM-CAP but only at regional/MS/EU
and farm type levels. Individual data cannot be made public:
_Agronomic/structural indicators:_
* Land allocation/crop area (ha)
* Herd size/animal number (heads)
* Livestock density (LU/ha)
* Share of arable land in Utilized Agricultural Area
* Share of grassland in Utilized Agricultural Area
* Land use change (ha)
* Agricultural production (Tons)
* Intermediate input use (Tons)

_Economic indicators:_
* Agricultural output (€)
* CAP first pillar subsidies (€)
* CAP second pillar subsidies (€)
* Intermediate input costs (€)
* Variable costs (€)
* Total costs (€)
* Gross farm income (€)
* Net Farm Income (€)

_Environmental indicators:_
* Biodiversity index (index)
* Soil erosion (tonnes)
## 3.2 Intellectual Property Rights (IPR)
Intellectual Property Rights (IPR) will receive special attention from the
beginning. All rules regarding management of knowledge and IPR will be
governed by the Consortium Agreement (CA). SUPREMA based its CA on the DESCA
model Consortium Agreement for H2020, and will adhere to the rules laid down
in Annex II of the Grant Agreement. The CA will address background and
foreground knowledge, ownership, protected third-party components of the
products, and protection, use and dissemination of results and access rights.
The following principles will be applied:
* Pre-existing know how: Each Contractor is and remains the sole owner of its IPR over its pre-existing know-how. The Contractors will identify and list the pre-existing know-how over which they may grant access rights for the project. The Contractors agree that the access rights to the pre-existing know-how needed for carrying out their own work under the project shall be granted on a royalty-free basis.
* Ownership and protection of knowledge: The ownership of the knowledge developed within the project will be governed by an open source license.
* Open data: Data and results obtained during the project that are based on open public-sector data will be made available free of charge.
The procedures for the dissemination, protection and exploitation of
intellectual property rights (IPR) are clearly covered in the Consortium
Agreement (in Section 6: Governance Structure, Sub-section 6.2.4: Veto
rights). The intention has been to balance the requirements necessary to
protect such intellectual property and the foreseen dissemination objectives.
IPR will be applied according to the rules of the employer under the
applicable European and national laws and regulations.
# 4 Update of the data management plan
During the project review of SUPREMA (meeting on 4 March 2019) the Data
Management Plan (D4.3) was qualified as generic. This makes sense for an
initial version (submitted June 2018, M6 of SUPREMA), and the plan is now
updated with the initial set of variables that will be released open access.
Table 4, Table 5 and Table 6 present the current scenario of data released in
Open Access.
Table 4 maps the AGMEMOD and MITERRA models to the set of indicators (existing
and new) that could be released open access. When an indicator is captured by
a model it is highlighted in green, and when it is not captured it is
highlighted in red.
Table 4. Mapping AGMEMOD and MITERRA to SUPREMA variables, indicators and
sectors
<table>
<tr>
<th>
Variables
</th>
<th>
Unit
</th>
<th>
Captured by models (yes, no)
</th>
<th>
Comments
</th> </tr>
<tr>
<td>
Prices and (farm) Income variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Real producer price/input price
</td>
<td>
USD/t
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animal products; member states (MS) level
</td> </tr>
<tr>
<td>
Real export price
</td>
<td>
USD/t
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
</td> </tr>
<tr>
<td>
Livestock input costs (incl feed costs, other costs)
</td>
<td>
euro/ton
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
</td> </tr>
<tr>
<td>
Farm sector income (gross income: sector returns -/- intermediate costs)
</td>
<td>
th euro/farm
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
</td> </tr>
<tr>
<td>
Area and yield variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Area harvested
</td>
<td>
1000 ha
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops; MS
</td> </tr>
<tr>
<td>
Land cover
</td>
<td>
1000 ha
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Total land; MS
</td> </tr>
<tr>
<td>
Crop yield
</td>
<td>
dm t/ha, fm t/ha
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops; MS
</td> </tr>
<tr>
<td>
Exogenous crop yield
</td>
<td>
dm t/ha, fm t/ha
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops; MS
</td> </tr>
<tr>
<td>
Livestock yield (endogenous)
</td>
<td>
kg prt/ha
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Milk, meat; MS
</td> </tr>
<tr>
<td>
Exogenous livestock yield trend
</td>
<td>
kg prt/ha
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Milk, meat; MS
</td> </tr>
<tr>
<td>
Feed conversion efficiency (endogenous)
</td>
<td>
kg prt/kg prt
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Ruminants; MS
</td> </tr>
<tr>
<td>
Feed conversion efficiency trend
</td>
<td>
kg prt/kg prt
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
</td> </tr>
<tr>
<td>
Market variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Food use
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animal products; MS
</td> </tr>
<tr>
<td>
Feed use
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops, MS
</td> </tr>
<tr>
<td>
Feed use
</td>
<td>
1000 t prt
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops, MS
</td> </tr>
<tr>
<td>
Other use (seed /industrial use, losses)
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops, MS
</td> </tr>
<tr>
<td>
Imports
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animal products; MS
</td> </tr>
<tr>
<td>
Exports
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animal products; MS
</td> </tr>
<tr>
<td>
Production
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animal products; MS
</td> </tr>
<tr>
<td>
Domestic use (total use = food+feed+other)
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animal products; MS
</td> </tr>
<tr>
<td>
Net trade
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animal products; MS
</td> </tr>
<tr>
<td>
Feed use ruminant meat
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Animals; MS
</td> </tr>
<tr>
<td>
Feed use ruminant meat
</td>
<td>
1000 t prt
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Animals; MS
</td> </tr>
<tr>
<td>
Feed use dairy
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Animals; MS
</td> </tr>
<tr>
<td>
Feed use dairy
</td>
<td>
1000 t prt
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Animals; MS
</td> </tr>
<tr>
<td>
Feed fish sector
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Fish families; MS
</td> </tr>
<tr>
<td>
Seed use
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops, MS
</td> </tr>
<tr>
<td>
Other industrial use
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops, MS
</td> </tr>
<tr>
<td>
Biodiesel/bioethanol for industrial use
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops, MS
</td> </tr>
<tr>
<td>
Stocks
</td>
<td>
1000 t/1000 h
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops, MS
</td> </tr> </table>
<table>
<tr>
<th>
Slaughterings
</th>
<th>
1000 h
</th>
<th>
AGMEMOD; MITERRA
</th>
<th>
Animals; MS
</th> </tr>
<tr>
<td>
Self-sufficiency rate (production/domestic use)
</td>
<td>
Ratio
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animal products; MS
</td> </tr>
<tr>
<td>
Food security _(need for good definition)_
</td>
<td>
_Index_
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
</td> </tr>
<tr>
<td>
Environmental variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Fertiliser N
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Water for irrigation
</td>
<td>
km3
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Total GHG emissions
</td>
<td>
MtCO2e
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Total CO2 emissions
</td>
<td>
MtCO2e
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Total CH4 emissions
</td>
<td>
MtCO2e
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Total N2O emissions
</td>
<td>
MtCO2e
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Total NH3 emissions
</td>
<td>
1000 kg N
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Total N leaching and runoff
</td>
<td>
1000 kg N
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
NO3 concentration groundwater
</td>
<td>
mg NO3/liter
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Soil organic carbon balance
</td>
<td>
kg C/ha/year
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Biodiversity change
</td>
<td>
% ch/yr
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Total crops; MS
</td> </tr>
<tr>
<td>
Soil erosion _(need for good definition)_
</td>
<td>
Index
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Total crops; MS
</td> </tr>
<tr>
<td>
Technological innovation variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Technical mitigation options – Production
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Technical mitigation options – Emissions
</td>
<td>
MtCO2e
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Technical mitigation options - CO2
</td>
<td>
MtCO2e
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Technical mitigation options - CH4
</td>
<td>
MtCO2e
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Technical mitigation options - N2O
</td>
<td>
MtCO2e
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Internet of things/digitalisation _(need for good definition)_
</td>
<td>
_Index_
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
</td> </tr>
<tr>
<td>
Nitrification inhibitors _(need for good definition)_
</td>
<td>
_Index_
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Precision/smart farming _(need for good definition)_
</td>
<td>
_Index_
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Macroeconomic variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Total population
</td>
<td>
Million
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Total country; MS
</td> </tr>
<tr>
<td>
Total GDP (MER)
</td>
<td>
bn USD 2005 MER
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Total country; MS
</td> </tr>
<tr>
<td>
World prices
</td>
<td>
usd/1000 kg
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
GDP deflator (national inflation rate in yr t, compared to base year)
</td>
<td>
index (2015=100)
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Total country; MS
</td> </tr>
<tr>
<td>
Exchange rate
</td>
<td>
euro/dollar
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Total country; MS
</td> </tr>
<tr>
<td>
CAP policy variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Ecological focus area
</td>
<td>
%
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Total country; MS
</td> </tr>
<tr>
<td>
Budgetary national envelope
</td>
<td>
thsd euro
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Total country; MS
</td> </tr>
<tr>
<td>
Voluntary coupled payments
</td>
<td>
thsd euro
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Young farmers payments
</td>
<td>
thsd euro
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Greening payments
</td>
<td>
thsd euro
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Price support (from envelope)
</td>
<td>
euro/100 kg
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Environmental policy variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Carbon tax level
</td>
<td>
USD/tCO2e
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
</td> </tr>
<tr>
<td>
Climate policy targets
</td>
<td>
MtCO2e
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Total country; MS
</td> </tr>
<tr>
<td>
Trade policy variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Tariff rate quotas
</td>
<td>
_1000 t_
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Consumer preference variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Qualitative variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Soil quality _(need for good definition)_
</td>
<td>
Index
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Crops; MS
</td> </tr>
<tr>
<td>
Water quality _(need for good definition)_
</td>
<td>
Index
</td>
<td>
AGMEMOD; MITERRA
</td>
<td>
Total country; MS
</td> </tr> </table>
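Several of the market variables above are linked by definition: domestic use is the sum of food, feed and other use; net trade is exports minus imports; and the self-sufficiency rate is production divided by domestic use. The following minimal sketch, using made-up numbers for a single commodity, illustrates how these derived indicators follow from the reported balance items:

```python
# Illustrative computation of the derived market-balance indicators listed in
# Table 4; the numbers are made up and stand for one commodity in 1000 t.
balance = {"food_use": 500.0, "feed_use": 300.0, "other_use": 50.0,
           "production": 900.0, "imports": 120.0, "exports": 170.0}

domestic_use = balance["food_use"] + balance["feed_use"] + balance["other_use"]
net_trade = balance["exports"] - balance["imports"]
self_sufficiency = balance["production"] / domestic_use

print(f"Domestic use:     {domestic_use:.0f} (1000 t)")
print(f"Net trade:        {net_trade:.0f} (1000 t)")
print(f"Self-sufficiency: {self_sufficiency:.2f} (ratio)")
```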
Table 5 maps the MAGNET, GLOBIOM and CAPRI models to the set of indicators
(existing and new) that could be released open access. When an indicator is
captured by a model it is highlighted in green, and if it is not captured it is
highlighted in red.
Table 5 Mapping MAGNET, GLOBIOM and CAPRI to SUPREMA variables, indicators and
sectors
<table>
<tr>
<th>
Variables
</th>
<th>
Unit
</th>
<th>
Captured by models (yes, no)
</th>
<th>
Comments
</th> </tr>
<tr>
<td>
Prices and (farm) Income variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Real producer price/input price
</td>
<td>
USD/t
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animal products; member states (MS) level
</td> </tr>
<tr>
<td>
Real export price
</td>
<td>
USD/t
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
</td> </tr>
<tr>
<td>
Livestock input costs (incl feed costs, other costs)
</td>
<td>
euro/ton
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
</td> </tr>
<tr>
<td>
Farm sector income (gross income: sector returns -/- intermediate costs)
</td>
<td>
th euro/farm
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
</td> </tr>
<tr>
<td>
Area and yield variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Area harvested
</td>
<td>
1000 ha
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops; MS
</td> </tr>
<tr>
<td>
Area harvested – rainfed
</td>
<td>
1000 ha
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
</td> </tr>
<tr>
<td>
Area harvested – irrigated
</td>
<td>
1000 ha
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
</td> </tr>
<tr>
<td>
Land cover
</td>
<td>
1000 ha
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Total land; MS
</td> </tr>
<tr>
<td>
Crop yield
</td>
<td>
dm t/ha, fm t/ha
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops; MS
</td> </tr>
<tr>
<td>
Crop yield – rainfed
</td>
<td>
dm t/ha, fm t/ha
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
</td> </tr>
<tr>
<td>
Crop yield – irrigated
</td>
<td>
dm t/ha, fm t/ha
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
</td> </tr>
<tr>
<td>
Exogenous crop yield
</td>
<td>
dm t/ha, fm t/ha
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
Crops; MS
</td> </tr>
<tr>
<td>
Climate change shifter on crop yield
</td>
<td>
%
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
</td> </tr>
<tr>
<td>
Livestock yield (endogenous)
</td>
<td>
kg prt/ha
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Milk, meat; MS
</td> </tr>
<tr>
<td>
Exogenous livestock yield trend
</td>
<td>
kg prt/ha
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Milk, meat; MS
</td> </tr>
<tr>
<td>
Feed conversion efficiency (endogenous)
</td>
<td>
kg prt/kg prt
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
Ruminants; MS
</td> </tr>
<tr>
<td>
Feed conversion efficiency trend
</td>
<td>
kg prt/kg prt
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
</td> </tr>
<tr>
<td>
Market variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Food use
</td>
<td>
1000 t
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animal products;
MS
</td> </tr>
<tr>
<td>
Feed use
</td>
<td>
1000 t
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops, MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Feed use
</td>
<td>
1000 t prt
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops, MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Other use (seed /industrial use, losses)
</td>
<td>
1000 t
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops, MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Imports
</td>
<td>
1000 t
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animal products; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Exports
</td>
<td>
1000 t
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animal products; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Production
</td>
<td>
1000 t
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animal products; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Domestic use (total use = food+feed+other)
</td>
<td>
1000 t
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animal products; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Net trade
</td>
<td>
1000 t
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animal products; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Feed use ruminant meat
</td>
<td>
1000 t
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Animals; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Feed use ruminant meat
</td>
<td>
1000 t prt
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
Animals; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Feed use dairy
</td>
<td>
1000 t
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Animals; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Feed use dairy
</td>
<td>
1000 t prt
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
Animals; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Feed fish sector
</td>
<td>
1000 t
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
MAGNET in usd
</td> </tr>
<tr>
<td>
Feed fish sector
</td>
<td>
1000 t prt
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
MAGNET in usd
</td> </tr> </table>
<table>
<tr>
<th>
Seed use
</th>
<th>
1000 t
</th>
<th>
MAGNET; GLOBIOM; CAPRI
</th>
<th>
Crops, MS; MAGNET in usd
</th> </tr>
<tr>
<td>
Other industrial use
</td>
<td>
1000 t
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
Crops, MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Biodiesel/bioethanol for industrial use
</td>
<td>
1000 t
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
Crops, MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Stocks
</td>
<td>
1000 t/1000 h
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
Crops, MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Slaughtering
</td>
<td>
1000 h
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
Animals; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Self-sufficiency rate (production/domestic use)
</td>
<td>
Ratio
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
Crops and animal products;
MS
</td> </tr>
<tr>
<td>
Food security _(need for good definition)_
</td>
<td>
_Index_
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
MAGNET in usd
</td> </tr>
<tr>
<td>
Environmental variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Fertiliser N
</td>
<td>
1000 t
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Water for irrigation
</td>
<td>
km3
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Total GHG emissions
</td>
<td>
MtCO2e
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Total CO2 emissions
</td>
<td>
MtCO2e
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Total CH4 emissions
</td>
<td>
MtCO2e
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Total N2O emissions
</td>
<td>
MtCO2e
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Total NH3 emissions
</td>
<td>
1000 kg N
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Total N leaching and runoff
</td>
<td>
1000 kg N
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
NO3 concentration groundwater
</td>
<td>
mg NO3/liter
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Soil organic carbon balance
</td>
<td>
kg C/ha/year
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Energy use
</td>
<td>
_PJ_
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
</td> </tr>
<tr>
<td>
Water use
</td>
<td>
_1000 m3_
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
</td> </tr>
<tr>
<td>
Weather volatility/climate change _(need for good definition)_
</td>
<td>
_Index_
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
</td> </tr>
<tr>
<td>
Technological innovation variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Technical mitigation options – Production
</td>
<td>
1000 t
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Technical mitigation options – Emissions
</td>
<td>
MtCO2e
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Technical mitigation options - CO2
</td>
<td>
MtCO2e
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Technical mitigation options - CH4
</td>
<td>
MtCO2e
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Technical mitigation options - N2O
</td>
<td>
MtCO2e
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Internet of things/digitalisation _(need for good definition)_
</td>
<td>
_Index_
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
</td> </tr>
<tr>
<td>
Nitrification inhibitors _(need for good definition)_
</td>
<td>
_Index_
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Precision/smart farming _(need for good definition)_
</td>
<td>
_Index_
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Macroeconomic variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Total population
</td>
<td>
Million
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
Total country; MS
</td> </tr>
<tr>
<td>
Total GDP (MER)
</td>
<td>
bn USD 2005 MER
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
Total country; MS
</td> </tr>
<tr>
<td>
World prices
</td>
<td>
usd/1000 kg
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
GDP deflator (national inflation rate in yr t, compared to base year)
</td>
<td>
index (2015=100)
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
Total country; MS
</td> </tr>
<tr>
<td>
Exchange rate
</td>
<td>
euro/dollar
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
Total country; MS
</td> </tr>
<tr>
<td>
Labour productivity (labour units/turnover)
</td>
<td>
Lab units/turnover
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
</td> </tr>
<tr>
<td>
Employment
</td>
<td>
Million
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
</td> </tr>
<tr>
<td>
CAP policy variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Ecological focus area
</td>
<td>
%
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
Total country; MS
</td> </tr>
<tr>
<td>
Budgetary national envelope
</td>
<td>
thsd euro
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
Total country; MS
</td> </tr>
<tr>
<td>
Voluntary coupled payments
</td>
<td>
thsd euro
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Young farmers payments
</td>
<td>
thsd euro
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Greening payments
</td>
<td>
thsd euro
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Price support (from envelope)
</td>
<td>
euro/100 kg
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Environmental policy variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Carbon tax level
</td>
<td>
USD/tCO2e
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
</td> </tr>
<tr>
<td>
Climate policy targets
</td>
<td>
MtCO2e
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Total country; MS
</td> </tr>
<tr>
<td>
Energy policy targets
</td>
<td>
_PJ_
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
</td> </tr>
<tr>
<td>
Positive/negative externalities
</td>
<td>
euro/100 kg
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
</td> </tr>
<tr>
<td>
Trade policy variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Tariff rate quotas
</td>
<td>
_1000 t_
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Consumer preference variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
p.c. calory availability
</td>
<td>
kcal/cap/d
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
</td> </tr>
<tr>
<td>
Regional food products
</td>
<td>
_% in total products_
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
</td> </tr>
<tr>
<td>
Qualitative variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Soil quality _(need for good definition)_
</td>
<td>
Index
</td>
<td>
MAGNET; GLOBIOM; CAPRI
</td>
<td>
Crops; MS
</td> </tr>
<tr>
<td>
Water quality _(need for good definition)_
</td>
<td>
Index
</td>
<td>
MAGNET; GLOBIOM;CAPRI
</td>
<td>
Total country; MS
</td> </tr> </table>
Table 6 maps the AGMEMOD and MAGNET models to the set of indicators (existing
and new) that could be released open access. When an indicator is captured by a
model it is highlighted in green, and if it is not captured it is highlighted
in red.
Table 6 Mapping AGMEMOD and MAGNET to SUPREMA variables, indicators and
sectors
<table>
<tr>
<th>
Variables
</th>
<th>
Unit
</th>
<th>
Captured by models (yes, no)
</th>
<th>
Comments
</th> </tr>
<tr>
<td>
Prices and (farm) Income variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Real producer price/input price
</td>
<td>
USD/t
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animal products; MS and world regional level;
</td> </tr>
<tr>
<td>
Real export price
</td>
<td>
USD/t
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
</td> </tr>
<tr>
<td>
Livestock input costs (incl feed costs, other costs)
</td>
<td>
euro/ton
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
</td> </tr>
<tr>
<td>
Farm sector income (gross income: sector returns -/- intermediate costs)
</td>
<td>
th euro/farm
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
</td> </tr>
<tr>
<td>
Area and yield variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Area harvested
</td>
<td>
1000 ha
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops; MS
</td> </tr>
<tr>
<td>
Land cover
</td>
<td>
1000 ha
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Total land; MS
</td> </tr>
<tr>
<td>
Crop yield
</td>
<td>
dm t/ha, fm t/ha
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops; MS
</td> </tr>
<tr>
<td>
Exogenous crop yield
</td>
<td>
dm t/ha, fm t/ha
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops; MS
</td> </tr>
<tr>
<td>
Climate change shifter on crop yield
</td>
<td>
%
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
</td> </tr>
<tr>
<td>
Livestock yield (endogenous)
</td>
<td>
kg prt/ha
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Milk, meat; MS
</td> </tr>
<tr>
<td>
Exogenous livestock yield trend
</td>
<td>
kg prt/ha
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Milk, meat; MS
</td> </tr>
<tr>
<td>
Market variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Food use
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animal products; MS;
</td> </tr>
<tr>
<td>
Feed use
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops, MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Other use (seed /industrial use, losses)
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops, MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Imports
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animal products; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Exports
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animal products; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Production
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animal products; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Domestic use (total use = food+feed+other)
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animal products; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Net trade
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animal products; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Feed use ruminant meat
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animal products; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Feed use dairy
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animal products; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Feed fish sector
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animal products; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Seed use
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animal products; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Other industrial use
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animal products; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Biodiesel/bioethanol for industrial use
</td>
<td>
1000 t
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animal products; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Stocks
</td>
<td>
1000 t/1000 h
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animal products; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Slaughtering
</td>
<td>
1000 h
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animal products; MS; MAGNET in usd
</td> </tr>
<tr>
<td>
Self-sufficiency rate (production/domestic use)
</td>
<td>
Ratio
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animal products; MS
</td> </tr>
<tr>
<td>
Environmental variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Total GHG emissions
</td>
<td>
MtCO2e
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Total CO2 emissions
</td>
<td>
MtCO2e
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Total CH4 emissions
</td>
<td>
MtCO2e
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Total N2O emissions
</td>
<td>
MtCO2e
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Total NH3 emissions
</td>
<td>
1000 kg N
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Macroeconomic variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Total population
</td>
<td>
Million
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Total country; MS
</td> </tr>
<tr>
<td>
Total GDP (MER)
</td>
<td>
bn USD 2005 MER
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Total country; MS
</td> </tr>
<tr>
<td>
World prices
</td>
<td>
usd/1000 kg
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
GDP deflator (national inflation rate in yr t, compared to base year)
</td>
<td>
index (2015=100)
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Total country; MS
</td> </tr>
<tr>
<td>
Exchange rate
</td>
<td>
euro/dollar
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Total country; MS
</td> </tr>
<tr>
<td>
Labour productivity (labour units/turnover)
</td>
<td>
Lab
units/turnover
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
</td> </tr>
<tr>
<td>
Employment
</td>
<td>
Million
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
</td> </tr>
<tr>
<td>
CAP policy variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Ecological focus area
</td>
<td>
%
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Total country; MS
</td> </tr>
<tr>
<td>
Budgetary national envelope
</td>
<td>
thsd euro
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Total country; MS
</td> </tr>
<tr>
<td>
Voluntary coupled payments
</td>
<td>
thsd euro
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Young farmers payments
</td>
<td>
thsd euro
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Greening payments
</td>
<td>
thsd euro
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Price support (from envelope)
</td>
<td>
euro/100 kg
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Environmental policy variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Carbon tax level
</td>
<td>
USD/tCO2e
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
</td> </tr>
<tr>
<td>
Trade policy variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Tariff rate quotas
</td>
<td>
_1000 t_
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
Crops and animals, MS
</td> </tr>
<tr>
<td>
Non-tariff measures (nr of different measures imposed on agric products)
</td>
<td>
_number/agr product_
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
</td> </tr>
<tr>
<td>
Consumer preference variables
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
p.c. calory availability
</td>
<td>
kcal/cap/d
</td>
<td>
AGMEMOD; MAGNET
</td>
<td>
</td> </tr> </table>
# 5 Conclusions
The AGMIP reporting template has been accepted as a base for the economic
models and will ensure some minimal comparability of information provided
across the economic models. The current deliverable is a “plan”, therefore it
may be expected that the process initiated with the data comparisons and
presentation of selected data by the SUPREMA models will trigger some further
harmonisation and standardisation that will go beyond the AGMIP template.
DataM is the European Commission’s data portal of agro-economic modelling. It
contains the outcomes of research activities, and is operated by the Joint
Research Centre (JRC) of the European Commission. DataM, including the web
portal but also the Information System, will be used to release model runs
that are considered for open access release. A baseline comparison and
harmonization action will be addressed for all models represented in SUPREMA.
A medium-term (until 2030) assessment of European agricultural policy
alternatives will cover CAPRI, IFM-CAP and AGMEMOD-MITERRA Europe. Finally,
SUPREMA will also use different modelling tools for the long-term (until 2050)
assessment of climate change goals, using GLOBIOM and MAGNET as leading
models.
1378_RESOLVD_773715.md
**Executive Summary**
The RESOLVD project aims to join the H2020 pilot on Open Research Data (ORD).
The consortium agreement reflects the common position of the consortium with
respect to the data management plan (DMP), which follows the FAIR (Findable,
Accessible, Interoperable and Reusable) principles.
In accordance with the “Guidelines on FAIR on Data Management in Horizon 2020”
(Version 3.0, 26 July 2016), this deliverable details:
* What data the project will collect and generate;
* Whether, and how, this data will be exploited or shared and made accessible/open for verification and re-use;
* How this data will be curated and preserved.
This is the first version of the data management plan, due in M6. The document
will be updated over the course of the project, at every periodic assessment
or whenever significant changes arise, to include new data sets and new result
sets.
**1\. Introduction**
### 1.1. Objectives
This data management plan aims to guarantee replicability and benchmarking of
validated results of the project; in particular, those presented in scientific
publications, addressing the following objectives that will be detailed in the
different sections:
* Identification of the types of data that RESOLVD will generate, including typology, origin, volume, formats and files.
* Definition of how the data will be organized, managed and documented, guaranteeing good data quality.
* Preparation of the storage strategy during the project execution and data preservation (repository).
* Definition of the project data policies, including issues related to intellectual property, and analysis of the costs of data preservation and storage.
### 1.2. Contributions of partners
<table>
<tr>
<th>
**Partner**
</th>
<th>
**Contribution**
</th> </tr>
<tr>
<td>
**UdG**
</td>
<td>
Owner and editor of the document. UdG has also contributed in the sections
referred to the institutional repository that is proposed for storing the
data.
</td> </tr>
<tr>
<td>
**UPC**
</td>
<td>
UPC has contributed to specify the data information regarding the data that
will be generated in the laboratory, at UPC.
</td> </tr>
<tr>
<td>
**EYPESA**
</td>
<td>
EYPESA has contributed by specifying the information for the data generated in
the real low voltage grid, collected by a SCADA system through the remote
terminal units (RTU), aggregator concentrator links, the Data Concentrator
Units (DCU) and smart meters.
</td> </tr>
<tr>
<td>
**CS**
</td>
<td>
CS has contributed by reporting information related to the possible data
generated by distributed sensing elements to be deployed in the grid.
</td> </tr> </table>
### 1.3. Report structure
The preparation of this deliverable has consisted in answering the questions
requested in the Guidelines on FAIR on Data Management in Horizon 2020. Core
sections in D8.4 are organised as follows:
* Section 2: it describes in detail the data that will be generated/collected in the project, explaining the purpose of the data generation/collection, its relation to the objectives of the project, and its origin, types and formats.
* Section 3: it includes all the questions referred to make the data findable, accessible, interoperable and re-usable through the proposed repository.
* Section 4: it explains the allocation of resources and cost foreseen for data preservation.
* Section 5: it addresses data recovery as well as secure storage and transfer of sensitive data.
* Section 6: it considers the ethical issues regarding sensitive information, where applicable.
**2\. Data Summary**
### 2.1. State the purpose of the data collection/generation
The data used in the RESOLVD project will be oriented towards improving
knowledge of how power flow behaves in the low voltage grid in the presence of
distributed renewable generation and high demand variability. The general
purpose of the RESOLVD project is to act (schedule and control) on the low
voltage grid in order to increase efficiency. With this aim, data will serve
the following purposes:
1. Enhance grid observability when monitoring: improve knowledge on demand/generation profiles, power flow computation, etc.
2. Modelling demand and generation for forecasting purposes: training of machine learning algorithms to forecast demand and generation in specific points of the grid.
3. Testing and performance evaluation of the technologies developed as part of the RESOLVD solution, and computation of KPIs during project validation: validation of the proposed solution and quantification of improvements based on indicators.
### 2.2. Explain the relation to the objectives of the project
The overall objective of RESOLVD is to improve efficiency and the hosting
capacity of distribution networks in a context of highly distributed renewable
generation by introducing energy flexibility and control by acting on the
grid. The following describes how the data being collected and generated
relate to the specific objectives derived from this primary one:
* Design, develop and test new hardware for monitoring and acting on the grid: These new devices will acquire physical measures of power and energy that will be used for monitoring and control.
* Resilient and efficient scheduling and operation of the LV grid: data from specific point of the grid will be used for modelling demand and generation and further forecasting.
* Analyse potential business models: data from different sources will be used to compute KPIs to be used in cost-efficiency analysis of different business models.
### 2.3. Specify the types and formats of data generated/collected
The data collected within the project are experimental, coming from a
laboratory and a real scenario, provided by UPC and EYPESA respectively. In
the case of data generated in the laboratory at UPC premises, the data refer
to time series of active and reactive power exchanged by the power electronics
device (energy router, ER) and the state of batteries (level of charge and
voltage level). Data are acquired through a test platform to which the devices
are attached.
In the case of data generated in the real low voltage grid provided by EYPESA
(see Section 6 for ethical issues), the data will be collected by a SCADA
system through the remote terminal units (RTU) and/or aggregator concentrator
links connected to smart meters. They include electrical variables such as
active power, reactive power, apparent power, voltage and currents, and will
be delivered as tables in *.csv files extracted from the SCADA database. The
real environment will also have access to data collected by the metering
infrastructure through Data Concentrator Units (DCU) and smart meters. This
data set includes energy (active and reactive), voltage and/or current,
acquired periodically and uploaded daily to the control center. These data are
associated with the metering point via an identifier known as the Universal
Code of Supply Point (Spanish acronym: CUPS), and the format will be *.xml or
*.csv, provided from the smart metering database. Data from other instruments
(PMUs and PQMs) used in the project could also be included in the plan, but at
this moment their nature and availability are unknown.
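As an illustration of how such an extract could be consumed, the following minimal sketch loads a daily smart-meter *.csv file with pandas and aggregates hourly energy per metering point; the file name and column names (timestamp, cups, active_energy_kwh) are illustrative assumptions, not the actual EYPESA export schema.

```python
# Minimal sketch of loading a daily smart-meter extract; file and column
# names are illustrative assumptions, not the project's actual schema.
import pandas as pd

readings = pd.read_csv(
    "smart_meter_extract.csv",   # hypothetical daily *.csv export
    parse_dates=["timestamp"],   # assumed timestamp column
    dtype={"cups": "string"},    # CUPS metering-point identifier
)

# Aggregate hourly active energy per metering point (CUPS).
hourly = (
    readings
    .set_index("timestamp")
    .groupby("cups")["active_energy_kwh"]
    .resample("1h")
    .sum()
)
print(hourly.head())
```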
### 2.4. Specify if existing data is being re-used (if any)
Weather data is also relevant for both electricity generation (PV generation)
and consumption forecasting. The _Servei Meteorològic de Catalunya_ (Catalan
Weather Service) has been contacted and has already provided hourly data from
1 January 2008 to 31 December 2017 of the solar irradiation, temperature, wind
speed and direction, and humidity registered by the automatic weather stations
located in the villages of Gurb and Orís. These are the two stations with
irradiation measurements that are closest to the project pilot. Nevertheless,
the data provided by the Catalan Weather Service are not allowed to be made
open by the project.
The agency has provided the data exclusively for research purposes within the
project. Access rights are directly managed by the Catalan Weather Service.
If other existing datasets (Open Access or other) are re-used during the
project execution, they will be included and described in further updates of
this data management plan. At the moment of submission of this document (M6),
no use of other existing data was foreseen.
### 2.5. Specify the origin of the data
Data described in subsection 2.3 are generated within the project. In the case
of data collected in the laboratory, they will be gathered by data acquisition
systems already installed there; measurements refer to electrical magnitudes.
In the case of the real low voltage grid provided by EyPESA, the data will be
obtained from SCADA and from the metering infrastructure through the Meter
Data Collector (MDC) systems. As mentioned, other instruments such as PMUs and
PQMs could be deployed in the grid during the project.
Weather data is directly provided by the Catalan Weather Service.
### 2.6. State the expected size of the data (if known)
For data generated in the laboratory at UPC, the expected size does not exceed
a few MB, in the format of Excel files.
Data from the real low voltage grid scenario has two main sources with
different time resolutions: data coming from smart meters in the validation
area generates 75 kB every day with a granularity of 60 minutes, while data
collected from the SCADA does not exceed 200 kB per day with a granularity of
3 minutes. Representative data is expected to cover one year, representing a
volume of around 100 MB (275 kB × 365 = 100,375 kB). PMUs and PQMs can supply
registers at higher sampling frequencies, but details are not available at
this time.
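A back-of-the-envelope check of the annual volume quoted above, using the daily figures for the two sources:

```python
# Arithmetic check of the annual raw-data volume quoted in the text.
daily_kb = 75 + 200          # smart meters + SCADA, kB per day
annual_kb = daily_kb * 365   # one representative year
print(annual_kb, "kB ≈", round(annual_kb / 1000), "MB")  # 100375 kB ≈ 100 MB
```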
### 2.7. Outline the data utility: to whom will it be useful
The datasets generated in the project could be useful for those electricity
actors and stakeholders (DSO, aggregators, technology and R+D providers, etc.)
that have interest in the low voltage energy management and business models
involving LV network operation (distributed resources generation, energy
islands, aggregation, demand response, etc. ). Data will be also useful for
scientists to check theoretical results and test algorithms.
**3\. Fair data**
### 3.1. Making data findable, including provisions for metadata
**3.1.1. Outline the discoverability of data (metadata provision)**
The metadata standards proposed to describe the datasets are Dublin Core and
the DataCite Schema, as they are flexible and commonly used standards and are
also the ones adopted by the European OpenAIRE repository.
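As an illustration of what such a record can look like, the following sketch builds a minimal Dublin Core (oai_dc) record with the Python standard library; all field values are placeholders rather than actual RESOLVD metadata.

```python
# A minimal Dublin Core (oai_dc) record built with the standard library; the
# field values below are placeholders, not actual RESOLVD metadata.
import xml.etree.ElementTree as ET

OAI_DC = "http://www.openarchives.org/OAI/2.0/oai_dc/"
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("oai_dc", OAI_DC)
ET.register_namespace("dc", DC)

record = ET.Element(f"{{{OAI_DC}}}dc")
for tag, value in [("title", "Data_WP2_1_User generated content"),
                   ("creator", "RESOLVD consortium"),
                   ("subject", "low voltage grid flexibility"),
                   ("type", "Dataset"),
                   ("rights", "CC-BY")]:
    ET.SubElement(record, f"{{{DC}}}{tag}").text = value

print(ET.tostring(record, encoding="unicode"))
```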
**3.1.2. Outline the identifiability of data and refer to standard
identification mechanism. Do you make use of persistent and unique identifiers
such as Digital Object Identifiers?**
Data will be made open through OpenAIRE-compatible repositories.
Identification of data in such repositories is given by a unique and
persistent Handle. For example, the UdG institutional repository
“https://dugi-doc.udg.edu/” assigns a unique and persistent URL to access the
document and dataset, following the format: _http://hdl.handle.net/10256/_ .
**3.1.3. Outline naming conventions used**
A final decision has not been taken at M6 yet, but the project dataset
identification will likely follow a naming convention of the form:
Data_<WPno>_<serial number of dataset>_<dataset title>. Example:
Data_WP2_1_User generated content.
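A small sketch of how this (not yet final) convention could be validated programmatically:

```python
# Validate dataset names against the proposed convention
# Data_<WPno>_<serial number of dataset>_<dataset title>.
import re

NAME_RE = re.compile(r"^Data_WP(?P<wp>\d+)_(?P<serial>\d+)_(?P<title>.+)$")

for name in ["Data_WP2_1_User generated content", "WP2_misnamed_file"]:
    m = NAME_RE.match(name)
    print(name, "->", m.groupdict() if m else "does not follow the convention")
```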
**3.1.4. Outline the approach towards search keywords**
Data sets have to be findable easily, rapidly and unambiguously. Therefore,
standard measures have to be used to identify the data sets. This can include
the definition and use of naming conventions, search keywords, version
numbers, metadata standards and standard data identifiers. With respect to
keywords, the following list is proposed at this stage of the project
execution: Energy systems (production, distribution, application), energy
collection, conversion and storage, renewable energy, low voltage grid
flexibility, advanced power electronics, storage management at grid level,
generation and demand forecasting, scheduling, self-healing, monitoring, PMU,
cybersecurity, energy business models.
Additionally, it has been agreed to follow the Smart Grid Architecture _Model_
(SGAM) to describe the different use cases proposed in the project in WP1;
therefore, all variable naming will be based on this existing model where
covered by it. Moreover, the SCADA systems and the WAMS and PMUs, used by
EYPESA and CS respectively, use the tele-control standard IEC 60870-5-104 /
EyPESA profile, IEC 61850 for substation/grid automation, and IEC 61970
(Common Information Model, CIM). In the next updates of this data management
plan, we will review whether these standards provide a list of keywords and/or
metadata.
**3.1.5. Outline the approach for clear versioning**
The repository will host the final data; it is not a working tool. Thus, there
will not be any versioning management for the data used in the project.
Moreover, due to the nature of the data (electrical variables such as active
power, reactive power, apparent power, voltage, currents, etc.), it is not
necessary to consider different versions of them.
**3.1.6. Specify standards for metadata creation (if any). If there are no
standards in your discipline describe what type of metadata will be created
and how**
Each data file will be accompanied by specific metadata in order to allow ease
of access and re-usability, using standards such as Dublin Core and DataCite
and following the guidelines recommended by OpenAIRE. The standards indicated
in Section 3.1.4 will be analysed to identify the metadata they use (if any).
### 3.2. Making data openly accessible
**3.2.1. Specify which data will be made openly available. If some data is
kept closed provide rationale for doing so**
In the case of data generated in the laboratory, there is no reason to
constrain open access to the data. However, the data collected in the real low
voltage grid will have two origins. First, EyPESA, as owner of the grid where
the pilot will be deployed, will provide data collected in its own
infrastructure (SCADA, AMI systems). Second, instruments and technologies
developed in the project will produce other data. The second dataset will be
made open, whereas the first will be supplied by the company to the consortium
under private and confidential conditions. When it is necessary to open this
first dataset, for example to illustrate transformations and dependencies with
the second set or to make publications replicable and more relevant, a
specific authorization from the data owner will be obtained.
**3.2.2. Specify how the data will be made available**
Laboratory data will be made available after publication of corresponding
research. Research could be included either in project deliverables or
articles. No embargo expected.
Company owned data (EyPESA) will follow two strategies depending on the
typology of data:
1. Data needed after publication of a contribution of research: will be opened without constraints.
2. Data needed to exchange information with the platform and stakeholders, without reproducing any specific research result being published: criteria explained in the section 3.2.1 of this document will be followed.
**3.2.3. Specify what methods or software tools are needed to access the data.
Is documentation about the software needed to access the data included? Is it
possible to include the relevant software (e.g. in open source code)?**
Data from experiments, delivered as data tables, will be made available in
text files (‘csv’, ‘xml’, ‘json’), easily accessible with any text editor,
spreadsheet software or read commands available in any software environment.
When necessary, a header or configuration file will be included to facilitate
reading. An API is also expected to be used: for the smart meter database, the
tool RabbitMQ is foreseen in the case of asynchronous communications, while
the SCADA database exchanges information through folders using text files
(‘csv’, ‘xml’, ‘json’), easily accessible as mentioned before.
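As a sketch of the asynchronous path, the snippet below consumes messages from a RabbitMQ queue using the pika client; the broker host and queue name are illustrative assumptions, not the project's actual configuration.

```python
# Minimal RabbitMQ consumer sketch (pika client); broker host and queue name
# are illustrative assumptions, not the project's actual configuration.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="smartmeter_readings", durable=True)

def on_message(ch, method, properties, body):
    # Each message body is expected to be a reading in csv/xml/json text form.
    print("received:", body.decode("utf-8"))

channel.basic_consume(queue="smartmeter_readings",
                      on_message_callback=on_message, auto_ack=True)
channel.start_consuming()
```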
**3.2.4. Specify where the data and associated metadata, documentation and
code are deposited**
At the moment of this document's delivery (M6), there is no final decision
about the repository where the data and publications will be deposited.
Universitat de Girona, coordinator of the project, has proposed its
institutional repository, _https://dugi-doc.udg.edu/_ , as a candidate;
however, other repositories (e.g. Zenodo) could also be used. This still has
to be discussed in order to reach an agreement. The final decision will be
included in future revisions of this document.
**3.2.5. Specify how access will be provided in case there are any
restrictions**
According to Articles 29.2 and 29.3 of the Grant Agreement, each beneficiary
must ensure open access (free-of-charge online access for any user) to all
peer-reviewed scientific publications relating to its results, depositing in a
research data repository, as soon as possible, the data (including associated
metadata) needed to validate the results presented in scientific publications.
Data may be used by third parties under the proposed CC-BY license, taking
into account that these data will be used under data protection law according
to the agreements reached whenever necessary.
### 3.3. Making data interoperable
**3.3.1. Assess the interoperability of your data. Specify what data and
metadata vocabularies, standards or methodologies you will follow to
facilitate interoperability.**
In order to facilitate interoperability, we use an OAI service, which serves
items in XML format so that other repositories can harvest their metadata. The
OAI server allows records to be requested in different formats (OAI_DC, METS,
DIDL, DATACITE; only the latter format is allowed for the OpenAIRE data set).
Besides metadata, the METS and DIDL formats offer the file download URI and
its preservation metadata in order to check file integrity.
The OAI_DC and DATACITE formats provide only metadata. The main problem of the
OAI_DC format, currently the most used standard, is that the Dublin Core
metadata element qualifier is lost during harvesting. This can be very
confusing, which is why controlled vocabularies for some of these metadata
elements have been created.
These vocabularies have a root that indicates which metadata element the
values belong to. Specifically, they provide the document type, version,
access rights and whether the item belongs to a research programme.
For the item language we use the ISO 639-3 standard. The DATACITE format
provides a hierarchical structure with related information that adds value for
other servers harvesting us (not only DSpace repositories, which do not need
to have a Dublin Core metadata system).
Finally, as stated in Section 3.1.4, the Smart Grid Architecture _Model_
(SGAM) and standards such as IEC 60870-5-104, IEC 61850 for substation/grid
automation and IEC 61970 (Common Information Model, CIM) will be checked in
the next updates of this data management plan, to analyse whether they have
their own vocabularies to facilitate interoperability.
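To make the harvesting side concrete, the sketch below issues a standard OAI-PMH ListRecords request and prints the Dublin Core titles it finds; the endpoint path is an assumption based on common DSpace-style deployments, not a confirmed address of the UdG repository.

```python
# OAI-PMH harvesting sketch; the endpoint URL is an assumed DSpace-style path,
# not a confirmed address of the UdG repository.
import urllib.request
import urllib.parse
import xml.etree.ElementTree as ET

BASE = "https://dugi-doc.udg.edu/oai/request"  # assumed endpoint
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}

with urllib.request.urlopen(f"{BASE}?{urllib.parse.urlencode(params)}") as resp:
    root = ET.fromstring(resp.read())

# Print every Dublin Core title in the harvested batch.
DC = "{http://purl.org/dc/elements/1.1/}"
for title in root.iter(f"{DC}title"):
    print(title.text)
```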
### 3.4. Increase data re-use (through clarifying licenses)
**3.4.1. Specify how the data will be licensed to permit the widest reuse
possible**
The documents and files associated with the datasets are proposed to be
licensed under a CC-BY license. During the project execution, data will be
internally available to the consortium partners. They will also be findable
and reusable through the final depositing repository (the institutional
repository at UdG has been proposed at M6) and through OpenAIRE, at the latest
by the end of the project.
**3.4.2. Specify when the data will be made available for re-use. If
applicable, specify why and for what period a data embargo is needed**
The data will remain re-usable after the end of the project by anyone
interested in it, with no access or time restrictions.
**3.4.3. Specify whether the data produced and/or used in the project is
useable by third parties, in particular after the end of the project. If the
re-use of some data is restricted, explain why.**
Data may be used by third parties under proposed CCBY license taking into
account that these data will be used under data protection law according to
the agreements achieved whenever necessary.
**3.4.4. Describe data quality assurance processes**
Since the data mining algorithms developed in WP3 will rely on the
availability of the data specified in Section 2, the data must be standardized
and organized for easy access while guaranteeing data quality. The application
of data preprocessing procedures for cleaning the data (filling missing
values, repairing abnormal values, removing outliers, etc.) and appropriate
storage in files are proposed for this purpose.
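A minimal preprocessing sketch along the lines described above; the file and column names are illustrative assumptions, not the project's actual schema.

```python
# Cleaning sketch for a 3-minute SCADA series; file and column names are
# illustrative assumptions, not the project's actual schema.
import pandas as pd

df = pd.read_csv("scada_extract.csv", parse_dates=["timestamp"])
df = df.set_index("timestamp").sort_index()

# Fill short gaps in the series by time interpolation (at most 5 samples).
df["active_power_kw"] = df["active_power_kw"].interpolate(method="time", limit=5)

# Flag outliers with a simple inter-quartile-range rule before storage.
q1, q3 = df["active_power_kw"].quantile([0.25, 0.75])
iqr = q3 - q1
mask = df["active_power_kw"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
clean = df[mask]
clean.to_csv("scada_extract_clean.csv")
```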
**3.4.5. Specify the length of time for which the data will remain re-usable**
The length of time for which the data will remain re-usable correspond to the
time period the repository system will be available, in accordance with the
project consortia.
**4\. Allocation of resources.**
### 4.1. Estimate the costs for making your data FAIR. Describe how you
intend to cover these costs
There are no costs associated with the described mechanisms for making the
data FAIR and preserved in the long term. This is a structural service at the
University of Girona, not associated with extra costs for the projects in
which the organization participates. The cost will be covered by the local
hosting institution in the context of RESOLVD, as part of standard network
system maintenance.
### 4.2. Clearly identify responsibilities for data management in your
project
The project coordinator, supported by data providers, has the ultimate
responsibility for the data management in the project. Moreover, the project
coordinator will be the liaison between the data owners and EC.
### 4.3. Describe costs and potential value of long term preservation
There are no additional costs associated with the described mechanisms for
making the data FAIR and preserved in the long term. The institutional
repository at UdG is a structural service of the University of Girona that
does not cause extra costs for the projects in which the organization
participates.
# Data security
## Address data recovery as well as secure storage and transfer of sensitive
data
We differentiate between the data used during the project execution and the
final datasets that will be uploaded and made available in the institutional
repository at the end of the project.
While the project is running and during the implementation of the work, Google
Drive will be used, so a copy of the data is kept automatically. These data
will not include sensitive information, as stated in the deliverables of work
package 9 (see Section 6 in this document). They will be hosted on a Windows
10 Enterprise server.
Regarding the institutional repository, a full copy is backed up four times a
year using the corresponding exportation and backup systems. In addition,
several periodic backups are made on demand, as well as before and after major
system and application updates.
# Ethical aspects to be covered in the context of the ethics review, ethics
section of DoA and ethics deliverables.
The ethical aspects are related to the use of personal data and are already
addressed in the project in the following deliverables:
* D9.1 (M1) to confirm that the ethical standards and guidelines of Horizon2020 are rigorously applied, regardless of the country in which the research is carried out.
* D9.2 (M1) to provide documents detailing secure data management procedures to guarantee ethics and privacy
* D9.3 (M3) to provide a document with EyPESA's authorization and the conditions for using and managing the data.
* D9.4 (to be submitted in M19) detailing how ethics and privacy issues related to data management comply with EU legislation.
# Other issues
At Spanish level, Law 14/2011 of June 1st, on Science, Technology and
Innovation (Article 37 Dissemination in open access) is being considered for
data management procedures.
# Further support in developing your DMP
This DMP has been created with the support tool “Pla de Gestió de Dades de
Recerca”, _http://dmp.csuc.cat_
The Research Data Management Plan tool is a development of the Digital
Curation Centre (DCC), adapted by the Consortium of University Services of
Catalonia (CSUC). It is based on the open source DMPRoadmap codebase. These
institutions work closely with research funders and universities to produce a
tool that generates active DMPs and caters for the whole lifecycle of a
project, from the bid-preparation stage through to completion.
# Conclusions
This deliverable has provided details about the data management plan
envisioned within the RESOLVD project. This is the first version of the DMP
delivered after 6 months of project.
This DMP will be updated in parallel with periodic reports and project
management plans. The partners providing data (UPC, EYPESA and CS) and the
project coordinator serve as references for questions related to data
management in RESOLVD.
1382_IMPAQT_774109.md
**1\. IMPAQT DATA PROCEDURES**
This deliverable reports the Data Management Plan (DMP), which defines the
main elements of the IMPAQT data management policy for the data collected or
generated. It details how the data will be treated during the IMPAQT project,
covering the procedures for managing IMPAQT data collection, analysis and
processing, preservation and sharing.
### 1.1. Data Purpose
The main purpose of the data gathered is to meet the objectives of the IMPAQT
project, i.e. to develop and validate in situ a multi-purpose (inland, coastal
and offshore production), multi-sensing (heterogeneous sensors and
new/emerging technologies) and multi-functional (advanced monitoring,
modelling, data analytics and decision making) management platform for
sustainable IMTA production. The specific objectives of the IMPAQT project
are:
* Design and implement new/emerging, efficient and cost-effective technologies to monitor and manage systems for IMTA production;
* Validate the IMPAQT systems and IMTA model in situ, and the fish/seafood products in the laboratory;
* Demonstrate optimal sustainable IMTA development from a holistic perspective based on ecosystem services and circular economy principles;
* Promote an effective transfer of the knowledge generated by IMPAQT activities to EU aquaculture stakeholders.
The IMPAQT project will generate and collect several kinds of data, mostly
environmental observations, such as:
* Measurement data from diverse sources deployed in IMPAQT pilots;
* Full visualization of fish/shellfish and associated quality parameters at any given time;
* Water quality data needed for model development;
* Environmental, socioeconomic and cost-effectiveness analysis data (both qualitative and quantitative).
### 1.2. Data Principles
Participants of the IMPAQT consortium must follow this DMP when describing the
types of data that will be produced or collected, how the data will be
exploited and shared, the standards that will be applied, how the data will be
preserved and secured, and how all the information, procedures, tools and
instruments required to access, extract, exploit and reproduce IMPAQT data
will be provided. Data management is carried out in accordance with European
Commission (EC) guidelines 1 and also with the following data principles:
* Open Access;
* FAIR Data;
* IPR Management;
* Compliance with non-EU Partners.
### Open Access
The Open Access movement began in the 1990s, when access to the internet
became widely available. Open Access means that scientific information must be
publicly available and free of charge, so that everyone can read, download,
copy, distribute, print, search and consult it without facing any financial,
legal or technical barriers. Making research results accessible to everyone
speeds up innovation, improves the quality of results, and inspires
partnerships between society and the public and private sectors.
This movement gained even more notoriety at a meeting in October 2003 that
brought together several international experts who aspired to develop a web-
based research environment following the Open Access paradigm. This meeting
resulted in the Berlin Declaration on Open Access to Knowledge in the Sciences
and Humanities 2 , which also builds on the Budapest Open Access Initiative 3 ;
the subsequent annual conferences have increasingly sensitized other entities
to the accessibility of scientific information. The Declaration has been
signed by nearly 300 research institutions, libraries, archives, museums,
funding agencies and governments from around the world.
In the research & development context, Open Access to Scientific Information
4 refers to two main categories: scientific publications and research data.
The following Figure 1 presents the context of dissemination and exploitation
of the Open Access policy for scientific publications and research data.
Figure 1 - Open access to scientific publications and research data
_Scientific Publications_
Scientific publications are peer-reviewed scientific research articles, first
published in academic journals. Open Access to scientific publications means
free access for everyone, including the rights to read, download and print,
and also the rights to copy, distribute, search, link, trace and extract. Open
Access does not imply an obligation to publish results, since this decision is
entirely the responsibility of the partners, nor does it affect the decision
to commercially exploit the results. Open Access only becomes a consideration
once publication is chosen as the means of dissemination; the decision to
publish (or not) through Open Access should therefore only come after the more
general decision on whether to publish directly or to first seek IP
protection.
According to the Grant Agreement (GA), each IMPAQT partner must ensure Open
Access to scientific publications of its results and make every effort to
ensure the rights to copy, distribute, search and link, through two steps:
* Deposit Publications
* Provide Open Access
_Deposit Publications_
A digital copy of the document accepted for publication must be deposited in a
repository for scientific publications, in an accessible, standardised or
otherwise publicly accessible text file format. The publication must be
deposited at the latest by the publication date, and may be deposited earlier
(for example, at the moment the article is accepted by the journal).
When depositing publications, the research data required to validate the
results presented in the publication (underlying data) must also be deposited
in a data repository.
The chosen repository should be an online archive, preferably an institutional
or centralised repository, that does not claim rights over the deposited
publications or block access to them. OpenAIRE 5 (Open Access
Infrastructure for Research in Europe) is a suggested entry point for choosing
and selecting a repository.
_Provide Open Access_
Once the publication has been deposited, free access through the repository
must be guaranteed, and one of the two main methods of Open Access to
publications must be provided:
* **“Green” Open Access** – Partners may immediately deposit the publication in a chosen repository, provided that its Open Access is guaranteed after the embargo period established in the Open Access Publishing Agreement 6 (usually between 6 and 12 months, depending on the type of publication).
* **“Gold” Open Access** \- Partners can publish in open access journals, or in journals that sell subscriptions and offer the option of making individual articles openly accessible. Such articles are eligible for reimbursement during the IMPAQT project; once the project is completed, they cannot be reimbursed from the IMPAQT project budget.
_Research Data_
Research data refers to the data underlying publications (underlying data)
and/or other collected data such as statistics, experiment results,
measurements and observations. Open Access to research data is the right to
access and re-use data under the conditions established in the GA.
The access and re-use of research data generated in H2020 projects such as
IMPAQT is facilitated through the flexible Open Research Data (ORD) Pilot 7 .
The ORD Pilot aims to make research data available with as few restrictions as
possible, while protecting sensitive data from inappropriate access. Research
data need to be monitored with the objective of developing policy regarding
open science.
However, not all research data need to be open under the ORD Pilot, and during
the project lifetime different access policies (public, restricted, closed,
private) can be chosen for the data. This involves two steps:
* **Deposit Research Data** \- Project research data (and underlying data) should be deposited whenever possible into an online data repository. It is advisable that the repository allows depositing both publications and underlying data, with tools to “link” them (such as persistent identifiers and data citations). OpenAIRE also provides supplementary information and support for linking publications with underlying research data.
* **Provide Access** – All efforts and procedures must be taken to enable third parties to access, exploit, reproduce and disseminate the data free of charge. To do so, licenses must be attached to the deposited research data.
The requirements for Open Access do not imply an obligation to publish all
project results, because this decision is entirely the responsibility of the
IMPAQT partners. In addition, Open Access does not affect the decision to
commercially exploit the results.
The IMPAQT consortium believes in the concept of Open Research and in the
benefits that can be drawn from allowing the re-use of research data.
Therefore, the data produced in IMPAQT will be published under Open Access
procedures whenever possible.
### Explanatory note
The consortium is working on a list of all IMPAQT datasets, with detailed
information regarding their confidentiality, storage and availability
mechanisms. Each identified public IMPAQT dataset will have details on how it
can be accessed, and the restricted ones will have a clear explanation of why
they were considered restricted. This list is currently being updated and will
be made available online in two different versions: 1) IMPAQT Public Datasets,
which will be publicly available online, and 2) IMPAQT All Datasets, a
confidential list of all IMPAQT datasets, including public and restricted
ones. This second list will only be available to the consortium partners and
EC services.
The consortium has been working on a first version of the IMPAQT deployments,
so information is currently being collected. The plan is for both versions of
the IMPAQT Datasets list to be made available online during April 2020.
### FAIR Data
The IMPAQT consortium is conscious of the directives for Open Access to
publications and research data in H2020 projects and of its participation in
the Open Research Data Pilot. Making research data findable, accessible,
interoperable and re-usable (FAIR 8 ) is an integral part of the process of
open science and research.
Making research data FAIR enables both scientific research and society to
leverage the benefits of such data and can also make a significant
contribution to economic growth. The FAIR principles, presented in Figure 2,
are particularly helpful since they:
* support knowledge discovery and innovation;
* support data and knowledge integration;
* allow sharing and re-use of data;
* help make data and metadata machine-readable;
* allow data discovery through the harvesting and analysis of multiple datasets.
Figure 2 - FAIR Data principles 9
When managing research data, these principles should be heeded to ensure that
the IMPAQT research data will be shared in a way that enables and enhances re-
use, by humans and machines. Annex 1 provides a set of questions that should
be considered in order to make IMPAQT research data FAIR.
_Findable_
IMPAQT research data must be easy to discover and locate in order to be used
and re-used. Both IMPAQT data and metadata should be easy to find for humans
and computers; the use of machine-readable metadata is essential for datasets
to be discovered automatically (a small sketch after the list below
illustrates a machine-actionable lookup of a persistent identifier). To be
findable:
* IMPAQT research data and metadata must be given a unique, persistent, global identifier;
* IMPAQT research data should be described with rich metadata;
* IMPAQT research data and metadata must be registered/indexed in a searchable resource (repository);
* Metadata must specify the identifier of the IMPAQT research data.
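To make the “machine-readable” requirement concrete, the following minimal
Python sketch shows how metadata can be retrieved directly from a persistent
identifier. It relies on the content negotiation offered by DOI resolvers
(DataCite JSON); the DOI shown is a placeholder, not a real IMPAQT record:

```python
import requests

# Placeholder DOI: substitute the identifier of a real IMPAQT dataset.
doi = "10.5281/zenodo.0000000"

# DOI resolvers support content negotiation: requesting DataCite JSON
# returns machine-readable metadata instead of the human landing page.
resp = requests.get(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/vnd.datacite.datacite+json"},
    timeout=30,
)
resp.raise_for_status()
metadata = resp.json()
print(metadata.get("titles"), metadata.get("creators"))
```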
_Accessible_
All the information necessary to access the IMPAQT research data must be
provided, including any existing authentication/authorisation methods. Data do
not necessarily need to be open if there are good reasons, such as privacy
concerns or commercial interests, but there must be transparency about the
conditions of access and re-use. To be accessible:
* IMPAQT research data and metadata need to be retrievable by their identifier using a standardised communication protocol that is open and free, allowing authentication and authorisation procedures where required;
* Metadata must remain accessible, even when the IMPAQT data are no longer available.
_Interoperable_
Interoperability allows the exchange of data between entities such as
researchers, institutions and organisations. To be interoperable, IMPAQT
research data and metadata need to use community-agreed formats and should
also be interoperable with applications or workflows for analysis, storage and
processing. In order to be interoperable:
* IMPAQT research data and metadata must use a formal, accessible, shared and broadly applicable language for knowledge representation;
* IMPAQT research data and metadata should use vocabularies that follow the FAIR principles;
* IMPAQT research data and metadata should include qualified references to other research data or metadata.
_Re-usable_
Optimising data re-use is the basic purpose of the FAIR data principles.
IMPAQT research data should retain their initial richness and must be clearly
described. In order to be re-usable:
* IMPAQT data and metadata must have a plurality of accurate and relevant attributes;
* IMPAQT data and metadata must be released with a clear and accessible data usage license;
* IMPAQT data and metadata need to be associated with their provenance;
* IMPAQT data and metadata must meet the community standards relevant to their domain.
### IPR Management
In a project like IMPAQT, it is fundamental to manage the property rights of
the data used and generated during the project. Since the IMPAQT consortium
will produce research data, publications and underlying data, all Intellectual
Property Rights (IPR) must be safeguarded, using explicit licenses to make
results openly accessible 10 :
* **Publications** – Copyright must be safeguarded and appropriate licenses granted to publications. Creative Commons 11 licenses are recommended, since they offer useful licensing solutions for providing Open Access to third parties;
* **Research data** \- To make research data more openly accessible 12 , explicit licenses such as Creative Commons Attribution 4.0 (CC BY) or Creative Commons CCZero (CC0) should be attached to the research data deposited in the repository. To help select a license for the research data, it is recommended to use the EUDAT B2SHARE tool 13 , which includes an integrated license wizard to facilitate the selection.
All necessary steps must be taken to prevent both publications and research
data from being leaked or hacked, so as not to damage the IMPAQT project plan
or the opportunities and plans of individual IMPAQT partners.
### Compliance with non-EU Partners
Since two non-EU countries are involved in the IMPAQT project (China and
Turkey), the whole IMPAQT consortium needs to make sure that all ethical
requirements are met and that EU and national standards for research data
receive the approval of the local/national data protection 14 authority.
EU data transfers must follow the rules of the General Data Protection
Regulation (GDPR). In the People's Republic of China the corresponding rules
are included in the Cybersecurity Law of China (CLS) and in Turkey in the Data
Protection Law (DPL).
The IMPAQT consortium must pay attention to the following guidelines when
transferring data between countries:
* All EU partners need to follow the GDPR guidelines;
* The GDPR makes clear that non-EU partners must also follow and comply with it;
* The CLS requires that data generated in China be stored within Chinese territory and that, before such data are transferred to a foreign party, a security assessment covered by a separate regulation be conducted first;
* The DPL allows the transfer of data generated in Turkey to a foreign party if explicit consent has been provided, or if the destination country has an adequate level of protection, with both sides making a written statement to provide data protection.
If there is an additional need to transfer data, an agreement can be
established through the EC standard contractual clauses 15 for data transfers
between EU and non-EU countries. In the case of multinational companies,
binding corporate rules 16 can also be established if necessary.
### 1.3. Data Lifecycle
The IMPAQT DMP covers all the data lifecycle steps for the research data
generated or collected in the project; this is important for ensuring the
sustainability of the project and also creates opportunities for new, emerging
applications and services. The IMPAQT research data can be preserved in
different ways and may also have different data access or use policies.
In a typical project data lifecycle there are some key steps that must be
considered, and the following Figure 3 provides an overview of the IMPAQT Data
Lifecycle that will be applied to each research dataset.
Figure 3 - IMPAQT Data Lifecycle
* **Data Collection** – The first step is data collection/creation. Data need to be collected from their origin and kept in a workspace (it is always recommended to make backups);
* **Data Processing** – At this stage, the data must be identified, analysed and processed, and their quality ensured. It is also advisable to always make a copy of the raw data before starting to work with them. The analysis of the research data may also require the collection of new data, for the same or for other project purposes;
* **Data Storage** – The data need to be organised by specifying and choosing the file formats, their access policy and their metadata, and must be deposited in an online (and also local) repository. Once the data are in a repository, all efforts need to be made to allow their long-term preservation;
* **Data Share** – After depositing the data in an online repository, they are available to be accessed and discovered by third parties, and can then be used for other purposes (re-use).
The following section provides the IMPAQT Data Management Plan with detailed
guidelines that must be applied in each stage of the IMPAQT data lifecycle.
**2\. DATA MANAGEMENT PLAN**
### 2.1. Data Collection
When collecting/creating IMPAQT research data, it is recommended to follow
good practices, since the data may come from different origins and can take
many forms; the most common are text, numeric data, audio, code, pictures and
videos. It is recommended to keep the collected data in a workspace and to
make backups.
### Data Type
Research data can be classified as qualitative data, which refers to text,
pictures, video and observations, or quantitative data, which refers to
numerical data. Data can also be categorised as:
* **Observed Data** – Unique data that are collected in real time and cannot be reproduced (such as input data from sensors);
* **Experimental Data** – Data derived from laboratory equipment under controlled conditions (such as gene sequences);
* **Simulated Data** – Data generated from simulations of test models that study real or theoretical systems (such as climate models);
* **Derived Data** – Data resulting from the analysis or aggregation of data from various sources.
### Origin of Data
In the scope of IMPAQT, the research data will originate mostly from the
measurements of diverse sources deployed in the pilots, from the monitoring of
fish and associated quality parameters, and from environmental, socioeconomic
and cost-effectiveness analysis data, among others.
As the IMPAQT project evolves, the nature of the data may change, and the
origin of the data must therefore also be specified at collection time.
### 2.2. Data Processing
### Choose File Format
Once collected/generated and identified, IMPAQT research data should be
analysed and processed. Processing and analysis may require the collection of
new data, for the same or for other project purposes.
IMPAQT research data must be recorded using digital, user-friendly formats.
Choosing an accessible format allows the data to be preserved, accessed and
shared with third parties.
It is recommended to back up the raw data in their original format, without
changes or edits, and to use open standard formats. Where IMPAQT research data
come from sources with restricted, proprietary formats, every effort should be
made to convert the data to a common standard, preferably an open format. In
this case, documentation of the necessary resources, such as the tools,
instruments or software used to access, visualise and work with the data, must
be provided.
A chosen file format should be non-proprietary, unencrypted and uncompressed,
use an open and documented standard adopted by the community, use common
character encodings (e.g. ASCII, UTF-8) and be suited to the data type.
Some of the most used formats, according to the European Data Portal 17 ,
are CSV, TXT, HTML, JSON, PDF, XLS and XML. Table 1 below provides a set of
recommended digital formats for each data type; a short conversion sketch
follows the table.
Table 1 - Recommended Digital Formats

| **Type** | **Recommended Formats** |
| --- | --- |
| **Audio** | WAV, AIFF, MP3, MP4, FLAC |
| **Video** | MOV, MPEG-4, AVI, MP4 |
| **Pictures** | TIFF, JPEG, JPG, PNG, BMP |
| **Text/Documentation** | DOC, TXT, PDF |
| **Scripts/Code** | XML, HTML, JSON |
| **Database** | XLS, CSV, XML |
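As a hedged illustration of the conversion advice above (the file names are
placeholders, and pandas additionally requires an Excel engine such as xlrd
for legacy .xls or openpyxl for .xlsx), a proprietary spreadsheet can be
re-saved as an open, UTF-8 encoded CSV in a couple of lines of Python:

```python
import pandas as pd

# Placeholder file name following the IMPAQT naming convention.
df = pd.read_excel("IMPAQT_LCA_Inventory_Data_v0.1.xls")

# Keep the raw .xls untouched; write an open, uncompressed CSV copy.
df.to_csv("IMPAQT_LCA_Inventory_Data_v0.1.csv", index=False, encoding="utf-8")
```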
### File identification
From the beginning of the IMPAQT project it is important to identify files
correctly, using the same structure for both the active data and the backup
data.
For IMPAQT document files (such as deliverables), the identification should
follow the guidelines presented in D7.1 Project Quality Plan.
For IMPAQT research data files, it is recommended to use a descriptive name
that reflects the contents of the file, avoiding excessive length, special
characters and spaces. Use only numbers, letters and underscores, and if a
date is needed, specify it in a standard format (e.g. “DDMMYYYY”) for ease of
understanding. The attributes to include in the file naming convention for
IMPAQT research data are presented in the following example (a minimal helper
sketch follows the list below):
_**“IMPAQT_LCA_Inventory_Data_v0.1.xls”**_
1. A prefix specifying that this is IMPAQT data (e.g. “IMPAQT”);
2. An intuitive title for the data (e.g. “LCA_Inventory_Data”);
3. For each new version of the data, the respective version number (e.g. “v0.1”);
4. The file's format extension (e.g. “xls”).
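As an illustration only (this helper is not part of any official IMPAQT
tooling), a minimal Python sketch enforcing the naming convention above could
look as follows:

```python
from datetime import date

def impaqt_filename(title: str, version: str, ext: str) -> str:
    """Build a file name following the IMPAQT convention:
    prefix, intuitive title, version number and format extension."""
    # Replace spaces with underscores so only letters, digits
    # and underscores remain in the title part.
    safe_title = "_".join(title.split())
    return f"IMPAQT_{safe_title}_v{version}.{ext}"

# Reproduces the example file name shown above.
print(impaqt_filename("LCA Inventory Data", "0.1", "xls"))
# -> IMPAQT_LCA_Inventory_Data_v0.1.xls

# A DDMMYYYY date stamp, if one is needed in the name.
print(date.today().strftime("%d%m%Y"))
```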
### Provide metadata
Metadata are data that describe other data and the context in which they were
created. Metadata should be sufficient to enable the data to be discovered,
understood and re-used, and should support organisation, sharing and
preservation, facilitating the use of the data by others and the transparent
combination of information from different origins.
There are many different metadata schemas, using the Standard Generalized
Markup Language (SGML) or the Extensible Markup Language (XML). Metadata
schemas may be developed and maintained by standards organisations with
standards and schemas designed specifically for their types of data. Several
services list metadata standards by subject area, such as the Digital
Curation Centre 18 (DCC) and the Research Data Alliance (RDA) 19 . The Dublin
Core metadata standard is a good candidate: it covers the basic attributes,
can be used for any type of data, and an online tool 20 is available to
generate fully-formed Dublin Core metadata.
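As a hedged illustration (the record values below are placeholders, not real
IMPAQT metadata), a few lines of Python can emit a minimal Dublin Core record
using only the standard library:

```python
import xml.etree.ElementTree as ET

# Placeholder record for one IMPAQT dataset; all values are examples only.
record = {
    "title": "IMPAQT LCA Inventory Data",
    "creator": "LEITAT",
    "date": "2020-04-30",
    "format": "text/csv",
    "identifier": "doi:10.5281/zenodo.0000000",  # placeholder DOI
    "rights": "CC BY 4.0",
}

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

# Build <metadata> with one dc:* element per field.
root = ET.Element("metadata")
for field, value in record.items():
    ET.SubElement(root, f"{{{DC}}}{field}").text = value

print(ET.tostring(root, encoding="unicode"))
```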
All metadata needed to identify IMPAQT data must be provided. For
publications, it must be ensured that EU funding is acknowledged: information
on EU funding should be included so that the H2020 programme can be monitored,
statistics produced and the impact of the programme evaluated. The publication
date and any embargo period must be provided, together with a permanent and
unique identifier (a Digital Object Identifier, DOI), the action name, the
acronym and the Grant Number.
For research data, Table 2 below lists the metadata attributes that must be
provided for each IMPAQT research dataset.
Table 2 - IMPAQT research data metadata
| **Metadata** | **Description** |
| --- | --- |
| **Research Data Name** | Title of the IMPAQT research dataset; should be a name that is easy to search for |
| **Category Type** | The IMPAQT research data category (e.g. Production Data) |
| **Responsible Partner** | The IMPAQT partner responsible for its creation/generation |
| **Description & Purpose** | A description, including the procedures followed to obtain the results, its purpose and its expected benefits |
| **Format** | The respective format (e.g. CSV, XLS, DOC…) |
| **Expected Size** | The approximate (expected) size of the research data (e.g. < 100 MB, 3 GB) |
| **Origin** | How and where the IMPAQT research data was generated |
| **Repository** | The repository where the data will be submitted |
| **DOI** | Can be specified once the IMPAQT research data has been deposited in the repository |
| **Version** | The version number, used to keep track of all changes |
| **Keywords** | Keywords to be associated with the data (e.g. IMPAQT) |
| **Access** | The rights to access the IMPAQT research data |
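A hedged sketch of how a record following the Table 2 attributes could be
captured in machine-readable form is given below; the field names and values
are illustrative placeholders, not a prescribed IMPAQT schema:

```python
import json

# Placeholder record following the Table 2 attributes; values are examples.
dataset_record = {
    "research_data_name": "IMPAQT LCA Inventory Data",
    "category_type": "Environmental Monitoring",
    "responsible_partner": "LEITAT",
    "description_purpose": "Inputs/outputs collected from the pilots for LCA.",
    "format": "CSV",
    "expected_size": "<100 MB",
    "origin": "IMPAQT pilot sites",
    "repository": "Zenodo",
    "doi": None,          # assigned once deposited in the repository
    "version": "v0.1",
    "keywords": ["IMPAQT", "IMTA", "LCA"],
    "access": "restricted",
}

print(json.dumps(dataset_record, indent=2))
```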
### 2.3. Data Storage
Making IMPAQT research data easily findable and identifiable is fundamental.
This requires active backups and depositing the research data in a data
repository. It is advisable to make frequent backups and to keep source data
(raw data) separate from ongoing work or final data.
Attention must be paid to data protection (confidential or restricted data)
and to ease of access and collaboration (internal or external). Choosing a
data repository ensures maximum visibility, serves as a backup in case of
failures and guarantees data availability after the end of the IMPAQT project.
To preserve the data, a repository already established for the research domain
can be used, but using an external, online repository is also required. A data
repository can be found using the Registry of Research Data Repositories
portal 21 , which, in addition to specific searches, permits filtering
repositories by access category, data usage license, trusted data repository
status and whether a repository provides the data with a persistent
identifier.
Some aspects to be considered when choosing a repository are whether it:
* is trustworthy and keeps the IMPAQT research data available in the long term;
* provides all the means to match the IMPAQT project requirements (e.g. formats; access, back-up and recovery);
* allows a unique identifier to be assigned to IMPAQT research data, ensuring that research results are linked to the specific researchers and grants;
* provides a landing page for each IMPAQT research dataset, with metadata that makes it visible in searches and stimulates data re-use;
* offers clear terms and conditions (e.g. for data protection) and allows for re-use;
* offers statistics that allow tracking of how the IMPAQT research data were used.
Data stored in the repository must be of the highest quality and accompanied
by the documentation needed to access them. The data must be kept up to date
to ensure that the latest version is always available. All security measures
and backup policies needed to ensure the maintenance of the IMPAQT research
data must be applied.
It is advisable for IMPAQT to use Zenodo 22 , a cost-free repository
developed by CERN 23 through the OpenAIRE 24 project, which allows the
deposit of both publications and research data and facilitates linking
publications to their underlying data through persistent identifiers and data
citations.
Zenodo also follows the minimum metadata standards, stores metadata internally
in JSON format according to a defined JSON schema, and can export it in
several standard formats such as MARCXML, Dublin Core and the DataCite
Metadata Schema. It also provides a technical infrastructure that allows data
security and long-term preservation. Data files can be deposited as open,
restricted or closed access. Files deposited under closed access are protected
from unauthorised access at all levels; files deposited as restricted may be
shared with others only if certain requirements are confirmed, and will not be
made publicly available without approval.
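As a minimal sketch based on Zenodo's public REST API (the access token, file
name, title and license identifier below are all placeholders, and the license
id must come from Zenodo's own vocabulary), a deposit could be scripted as
follows:

```python
import requests

ZENODO = "https://zenodo.org/api"
TOKEN = "YOUR-ACCESS-TOKEN"  # placeholder personal access token

# 1. Create an empty deposition (draft record).
r = requests.post(f"{ZENODO}/deposit/depositions",
                  params={"access_token": TOKEN}, json={})
r.raise_for_status()
dep = r.json()

# 2. Upload a data file into the deposition's file bucket.
filename = "IMPAQT_LCA_Inventory_Data_v0.1.csv"  # placeholder file
with open(filename, "rb") as fp:
    requests.put(f"{dep['links']['bucket']}/{filename}",
                 params={"access_token": TOKEN}, data=fp).raise_for_status()

# 3. Attach minimal metadata: title, upload type, description and license.
metadata = {"metadata": {
    "title": "IMPAQT LCA Inventory Data",
    "upload_type": "dataset",
    "description": "Example IMPAQT dataset deposit (placeholder).",
    "license": "cc-by-4.0",  # license id from Zenodo's vocabulary
}}
requests.put(f"{ZENODO}/deposit/depositions/{dep['id']}",
             params={"access_token": TOKEN}, json=metadata).raise_for_status()

# 4. Publish: Zenodo mints a DOI and the record becomes citable.
requests.post(f"{ZENODO}/deposit/depositions/{dep['id']}/actions/publish",
              params={"access_token": TOKEN}).raise_for_status()
```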
The IMPAQT research data need to be stored during the project and must be
preserved for at least 5 years after the end of the IMPAQT project. Any
unexpected costs related to open access to research data are eligible for
reimbursement during the project, under the conditions defined in the GA.
### 2.4. Data Share
According to the IMPAQT GA, research data are owned by those who generated
them, and the IMPAQT consortium intends to make them publicly available,
easily discoverable and re-usable as early as possible, unless there is a
legitimate interest in protecting them. This approach aims to maximise
IMPAQT's visibility, the exploitation of its results and its long-term impact,
and to allow other researchers to use the data to validate the IMPAQT results
or as a starting point for their own investigations.
Interoperability and data re-use must be provided following the FAIR
principles. The research data should be shared in an easy and transparent way,
along with all the available metadata and documentation, to ensure that they
can be understood and accessed by other researchers, institutions and
organisations.
Since a huge amount of research data is generated in a project such as IMPAQT,
it must be decided which research data can be shared, and their access type
(public, restricted, private, other) must be specified.
Several publications will be produced during the IMPAQT project, and both
“Gold” and “Green” Open Access practices should be explored. High-impact,
highly cited publications should be provided through “Gold” Open Access to
ensure maximum visibility and immediate availability. The research data
required to validate the results of scientific publications (underlying data)
should be deposited in the chosen repository at the same time, or as soon as
possible, unless a decision has been made to protect them.
If it is necessary to assign restricted access to the data, every effort must
be made to make the data available to other researchers under controlled
conditions, and all methods and software tools needed to access the data must
also be provided.
To increase data re-use, every effort must be made to use standards and
formats that are compatible with common (and preferably free) software
applications. The IMPAQT research data need to be available through the
repository and accessible for re-use without limitations, both during and
after the IMPAQT project.
**3\. IMPAQT DATASET**
IMPAQT will generate several kinds of research data during the project life
cycle, represented as datasets. These datasets can be preserved in different
ways and may be subject to the data access or use policies mentioned in the
previous sections. They can also be analysed from a wide range of
perspectives, both for the project's development and for other scientific
purposes.
The following subsections identify the IMPAQT datasets available at M6 of the
project; for each one, relevant information was collected covering the topics
of the IMPAQT Dataset Template presented in “ANNEX B: IMPAQT Dataset
template”.
### 3.1. MI Pilot Site
| **Dataset Name** | MI Pilot Site |
| --- | --- |
| **Responsible & Collector** | MI (in cooperation with the second pilot site owners, Keywater Ltd) |
| **Category Type** | Environmental Monitoring |
| **Description & Purpose** | Environmental monitoring data collected from the sensors deployed on the pilot site, used to inform management decisions and the IMPAQT platform. These data are currently stored in various formats on MI servers and are kept for the historical record and to inform research work |
| **Formats** | .XLS files, ERDDAP |
| **Expected Size** | Depends on the data; expected to be < 1 GB |
| **Metadata & Standards** | None are expected |
| **Data Sharing & Access** | Non-strategic and environmental data will be shared openly where there are no commercially sensitive restrictions. Currently, elements of the environmental data are shared on ERDDAP ( _https://erddap.marine.ie/erddap/index.html_ ), and it is expected that new IMPAQT data will be added there (or to other open access locations suggested by WP leaders) as the project progresses. Data will be shared openly between the partners. Relevant data will be included in publications or made available as supplementary material |
| **Quality Assurance** | To ensure that quality is maintained, the data will be validated when downloaded |
| **Preservation & Archiving** | The data will be stored on MI servers for an indefinite period of time, with no associated costs, and will also be available on ERDDAP |
### 3.2. NSF Data 1
| **Dataset Name** | NSF Data 1 |
| --- | --- |
| **Responsible & Collector** | NSF |
| **Category Type** | Environmental Monitoring |
| **Description & Purpose** | Environmental monitoring data (biotic, abiotic and metocean) collected from the sensors deployed on the pilot site, used to inform management decisions and the IMPAQT platform. These data are currently stored in various formats on NSF servers and are kept for the historical record and to inform research work |
| **Formats** | .XLS files |
| **Expected Size** | Expected to be 500 MB |
| **Metadata & Standards** | None are expected |
| **Data Sharing & Access** | Publicly available upon request: it will be communicated clearly that the information is available for use, and upon contact the data will be distributed with proper guidance for the requester's intended use |
| **Quality Assurance** | To ensure that quality is maintained, the data will be validated when downloaded |
| **Preservation & Archiving** | The data will be stored on NSF servers until 5 years after the end of the project, with no practical costs associated |
### 3.3. NSF Data 2
| **Dataset Name** | NSF Data 2 |
| --- | --- |
| **Responsible & Collector** | NSF |
| **Category Type** | Hydrodynamic Monitoring |
| **Description & Purpose** | Hydrodynamic monitoring data (forces on the system) collected from the sensors deployed on the pilot site, used to inform management decisions and the IMPAQT platform. These data are currently stored in various formats on NSF servers and are kept for the historical record and to inform research work |
| **Formats** | .XLS files |
| **Expected Size** | Expected to be 500 MB |
| **Metadata & Standards** | None are expected |
| **Data Sharing & Access** | Publicly available upon request: it will be communicated clearly that the information is available for use, and upon contact the data will be distributed with proper guidance for the requester's intended use |
| **Quality Assurance** | To ensure that quality is maintained, the data will be validated when downloaded |
| **Preservation & Archiving** | The data will be stored on NSF servers until 5 years after the end of the project, with no practical costs associated |
### 3.4. LCA Inventory Data
| **Dataset Name** | LCA Inventory Data |
| --- | --- |
| **Responsible & Collector** | LEITAT (in cooperation with each pilot site owner) |
| **Category Type** | Environmental Monitoring |
| **Description & Purpose** | Input and output data collected from the pilots, related to the production process, allowing the environmental impacts to be quantified |
| **Formats** | .XLS files |
| **Expected Size** | Expected to be 500 KB |
| **Metadata & Standards** | None are expected; nevertheless, the data collected from the pilots will be aligned with ISO 14044 (LCA) |
| **Data Sharing & Access** | Access to the LCA Inventory Data should be restricted, since it will contain sensitive information related to resource consumption. The results reported by the LCA software will be in .XLS format, but will also be integrated into a .PDF document |
| **Quality Assurance** | To ensure that quality is maintained, the data come from direct measurements based on standards, expert judgement and data published in scientific papers |
| **Preservation & Archiving** | Editing of the dataset will be limited. The data will be stored for an indefinite period of time, with no associated costs |
### 3.5. Envi-Vars
| **Dataset Name** | Envi-Vars |
| --- | --- |
| **Responsible & Collector** | CAMLI and DEU |
| **Category Type** | Environmental Monitoring |
| **Description & Purpose** | Environmental variables used to represent and detect any changes in the biogeochemical status of the monitored pilot site. Mostly collected in situ, automatically by sensors and/or manually for post-processing procedures at labs. All data will be stored digitally on hard drives in an appropriate database format |
| **Formats** | .XLS files; high-resolution microscopic images will also be used |
| **Expected Size** | Expected to be several GB |
| **Metadata & Standards** | None are expected; nevertheless, the data collected from the pilots will be aligned with ISO 14044 (LCA) |
| **Data Sharing & Access** | Access is restricted until the related publications are submitted; the data can be accessed through .XLS files |
| **Quality Assurance** | For the data measured by sensors, the calibration parameters of the sensors have to be archived for each calibration performed (including dates). For the data measured in the lab, the necessary methodological coefficients (factors and absorbance values of standards, etc.) have to be archived |
| **Preservation & Archiving** | The data will be backed up on CDs, hard drives and cloud storage until 5 years after the end of the project, with no practical costs associated |
### 3.6. Data of Production Operations
| **Dataset Name** | Data of Production Operations |
| --- | --- |
| **Responsible & Collector** | CAMLI and DEU |
| **Category Type** | Production Data |
| **Description & Purpose** | Data representing the production operations at the pilot sites, to be coupled with the environmental data for analysing and establishing an integrated management model. These data include: feeding (amount/frequency), mortality (collection of dead fish/mussels), net/rope changes (frequency), size grading (fish), medical treatment (drug/vaccine use) in terms of dosages and duration, and others |
| **Formats** | .XLS files |
| **Expected Size** | Expected to be several GB |
| **Metadata & Standards** | TBD |
| **Data Sharing & Access** | Access has to be restricted to the partners of the tasks in which the data are going to be used; a confidentiality agreement among users is also required for access |
| **Quality Assurance** | To ensure that quality is maintained, the data will be checked for alignment with the planned production plan |
| **Preservation & Archiving** | Each pilot will be responsible for ensuring archiving, preservation and security of the data by backing up on hard drives and cloud storage until 5 years after the end of the project (with an associated cost of 1 PM per year for each pilot site) |
### 3.7. Environmental Dataset
| **Dataset Name** | Environmental Dataset |
| --- | --- |
| **Partner Responsible** | CAMLI |
| **Category Type** | Environmental Monitoring |
| **Description & Purpose** | Measurement of environmental parameters in order to understand the functioning of IMTA and to assess the environmental impact of fish production. Environmental parameters collected include temperature, oxygen level, water current direction and speed, ammonia, phosphorus, light intensity, turbidity, meteorological conditions (wind, wave strength, direction), and others |
| **Formats** | .XLS files |
| **Sizes** | Depends on the data; expected to be < 1 GB |
| **Metadata & Standards** | None are expected; nevertheless, the data collected from the pilots will be aligned with ISO 14044 (LCA) |
| **Data Sharing & Access** | Access is restricted. The data also need to be harmonised before being opened to the public, foreseen after 3 years, and will be available through .XLS files |
| **Quality Assurance** | Data will be archived in IMST's and CAMLI's own cloud and data systems. For the data measured by sensors, the calibration parameters of the sensors have to be archived for each calibration performed (including dates) |
| **Preservation & Archiving** | Each pilot and DEU-IMST will be responsible for ensuring archiving, preservation and security of the data by backing up on hard drives and cloud storage until 5 years after the end of the project, with no practical costs associated |
### 3.8. IMTA Species Dataset
| **Dataset Name** | IMTA Species Dataset |
| --- | --- |
| **Responsible & Collector** | CAMLI |
| **Category Type** | Production Data |
| **Description & Purpose** | Data on the species raised in the culture system that need to be followed, covering aspects such as the growth, welfare and stress conditions of the species |
| **Formats** | .XLS files |
| **Expected Size** | Depends on the data; expected to be < 1 GB |
| **Metadata & Standards** | None are expected |
| **Data Sharing & Access** | Access is restricted. The data also need to be harmonised before being opened to the public, foreseen after 3 years, and will be available through .XLS files |
| **Quality Assurance** | Data will be archived in IMST's and CAMLI's own cloud and data systems. For the data measured by sensors, the calibration parameters of the sensors have to be archived for each calibration performed (including dates) |
| **Preservation & Archiving** | Each pilot site will be responsible for ensuring archiving, preservation and security of the data by backing up on hard drives and cloud storage until 5 years after the end of the project, with no practical costs associated |
### 3.9. Food Safety Dataset
| **Dataset Name** | Food Safety Dataset |
| --- | --- |
| **Responsible & Collector** | CAMLI |
| **Category Type** | Food Safety Tests |
| **Description & Purpose** | Since the species produced in IMTA will be used for human consumption, the Food Safety Dataset will be used to perform food safety analyses, such as heavy metal tests, and to help monitor food safety |
| **Formats** | .XLS and .PDF files |
| **Expected Size** | Depends on the data; expected to be < 1 GB |
| **Metadata & Standards** | None are expected |
| **Data Sharing & Access** | Access is restricted. The data also need to be harmonised before being opened to the public, foreseen after 3 years, and will be available through .XLS and .PDF files |
| **Quality Assurance** | Data will be archived in IMST's and CAMLI's own cloud and data systems. For the data measured by sensors, the calibration parameters of the sensors have to be archived for each calibration performed (including dates). Analyses and methods performed in accredited labs will be archived in the same system |
| **Preservation & Archiving** | Each pilot site will be responsible for ensuring archiving, preservation and security of the data by backing up on hard drives and cloud storage until 5 years after the end of the project, with no practical costs associated |
### 3.10. OptData
| **Dataset Name** | OptData |
| --- | --- |
| **Responsible & Collector** | IOPAN |
| **Category Type** | Environmental Monitoring |
| **Description & Purpose** | Measurements of several water parameters, mainly those that are optically significant, collected during field campaigns at the pilot sites. The data will be used for the development and validation of the satellite algorithms and products |
| **Formats** | .CSV files |
| **Expected Size** | Expected to be < 1 GB |
| **Metadata & Standards** | None are expected |
| **Data Sharing & Access** | Access is restricted until the related publications are submitted; the data can be accessed through .CSV files |
| **Quality Assurance** | For the data measured by the various instruments, the calibration parameters of the sensors have to be archived for each calibration performed, and the measurements have to be carried out with state-of-the-art, well-documented procedures and standards |
| **Preservation & Archiving** | The data will be stored on IOPAN servers until 5 years after the end of the project, with no associated costs |
### 3.11. SatData
| **Dataset Name** | SatData |
| --- | --- |
| **Responsible & Collector** | IOPAN |
| **Category Type** | Environmental Monitoring |
| **Description & Purpose** | Satellite products, based on locally developed algorithms, presenting the temporal and spatial variability of selected environmental parameters and supporting the monitoring of the pilot sites |
| **Formats** | netCDF files |
| **Expected Size** | Expected to be several GB |
| **Metadata & Standards** | None are expected |
| **Data Sharing & Access** | Access is restricted. The data also need to be harmonised before being opened to the public, and will be available through netCDF and GeoTIFF files |
| **Quality Assurance** | The satellite products have to be calculated with locally developed algorithms and validated against high-quality measurements at the pilot sites. The atmospheric corrections of ocean colour data should also be validated, ideally at each pilot site |
| **Preservation & Archiving** | The data will be stored on IOPAN servers until 5 years after the end of the project, with no associated costs |
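Since the SatData products are distributed as netCDF, an open and
self-describing format, a short, hedged sketch of how a third party could
inspect such a file with the common netCDF4 Python library is given below; the
file name and attributes are placeholders:

```python
from netCDF4 import Dataset  # one common library for reading netCDF files

# Placeholder product file; variable names depend on the actual product.
with Dataset("IMPAQT_SatData_example.nc") as ds:
    # Global attributes describe the dataset as a whole.
    print(ds.title if "title" in ds.ncattrs() else "(no title attribute)")
    # Each variable carries its own dimensions and shape.
    for name, var in ds.variables.items():
        print(name, var.dimensions, var.shape)
```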
### 3.12. Daily Meteo harvestering MIC
| **Dataset Name** | Daily Meteo harvestering MIC |
| --- | --- |
| **Responsible & Collector** | Unparallel Innovation |
| **Category Type** | Environmental Monitoring |
| **Description & Purpose** | Gathering real deployment data about daily meteorological conditions at a specific location (the MIC pilot), to correlate/validate against theoretical data (e.g. the EU tool) and to assist in the choice and specification of the energy harvesting system, given the energy requirements and weather conditions |
| **Formats** | .CSV files |
| **Expected Size** | Approx. 1 MB per day |
| **Metadata & Standards** | None are expected |
| **Data Sharing & Access** | Public |
| **Quality Assurance** | To ensure quality: validation/correlation against theoretical sources (e.g. the EU tool and the Wunderground site) |
| **Preservation & Archiving** | Data stored on Unparallel Innovation servers |
### 3.13. SGB dataset
| **Dataset Name** | SGB dataset |
| --- | --- |
| **Responsible & Collector** | YSFRI |
| **Category Type** | Environmental Monitoring |
| **Description & Purpose** | Environmental monitoring data collected from the platform deployed at the Sanggou Bay site, used to inform management decisions. The data are currently stored in various formats on the servers maintained by RobotFish and YSFRI; the new data generated since the start of the IMPAQT project are not yet integrated |
| **Formats** | .XLS & netCDF |
| **Expected Size** | Depends on the data; expected to be < 1 GB |
| **Metadata & Standards** | None are expected |
| **Data Sharing & Access** | Restricted until publications are submitted and sharing is permitted by the IT department of the local government |
| **Quality Assurance** | Quality assurance procedures |
| **Preservation & Archiving** | Data stored on RobotFish/YSFRI servers |
### 3.14. SAMS pilot site
| **Dataset Name** | SAMS pilot site |
| --- | --- |
| **Responsible & Collector** | SAMS |
| **Category Type** | Environmental Monitoring |
| **Description & Purpose** | Environmental monitoring data collected from the sensors deployed on the pilot site, used to inform management decisions and the IMPAQT platform. The data are kept with restricted access on SAMS in-house servers (accounting for full permissions and security), following defined internal data preservation practices; access is controlled |
| **Formats** | .XLS |
| **Expected Size** | Approx. > 1 GB |
| **Metadata & Standards** | None are expected |
| **Data Sharing & Access** | Restricted and not in a publicly accessible format, because the data are kept on a secure SAMS server accessible only to SAMS staff; however, SAMS is happy to share the data upon request |
| **Quality Assurance** | To ensure that quality is maintained, the data will be validated when downloaded |
| **Preservation & Archiving** | Data stored on SAMS servers |
The identified IMPAQT datasets originate mostly from the diverse sources
deployed in the IMPAQT pilots. They include a wide range of significant
information, such as quality parameters, hydrodynamic monitoring data,
environmental monitoring data, production data, food safety test data and
others.
As the IMPAQT project develops, new purposes and objectives may arise and it
may be necessary to generate and/or collect new IMPAQT datasets, which will be
identified by the consortium. The IMPAQT consortium will also maintain an
always up-to-date database of all identified IMPAQT datasets.
# CONCLUSION
The IMPAQT Data Management Plan results from the work carried out in WP6
within the first 6 months of the IMPAQT project, which is funded by the EU
H2020 research and innovation programme under Grant Agreement No 774109.
This Data Management Plan is in accordance with the European Commission
guidelines and also with the data principles of Section 1.2. In the scope of
Open Access, every effort will be made to ensure the right to copy, distribute
and easily search both scientific publications and research data.
The IMPAQT consortium is conscious of the directives for Open Access to
publications and research data in H2020 projects. IMPAQT results will be
published under Open Access procedures whenever possible, since the consortium
believes that making research data FAIR (findable, accessible, interoperable,
re-usable) and available with as few restrictions as possible can make a
significant contribution to economic growth.
As stated in the document, the aim of the consortium is to have the
non-strategic datasets publicly available by the end of the project. The
consortium is working on an online list of the identified IMPAQT datasets,
detailing the publicly available and restricted ones (including the reasons
why the latter are restricted). A version of this list (with the public IMPAQT
datasets) will be made publicly available, while the complete version will
only be available to the consortium and the EC. Both versions of this list
will be online during April 2020.
Section 2 covers all the data lifecycle steps of the DMP that will be applied
to each research dataset generated or gathered during the IMPAQT project,
helping to ensure the sustainability of the project. These steps are Data
Collection (how to collect and generate IMPAQT data), Data Processing (methods
to identify, analyse and process data), Data Storage (how to deposit research
data in an online repository) and Data Share (the steps to ensure that data
can be accessed, discovered and re-used by third parties).
Section 3 of this deliverable identifies the available IMPAQT datasets, mostly
originating from the diverse sources deployed in the IMPAQT pilots and
including relevant information such as quality parameters, hydrodynamic
monitoring data, environmental monitoring data, production data, food safety
test data and others. During the development of the project, the IMPAQT
consortium will maintain an always up-to-date database of all identified
IMPAQT datasets.
# FAIR DATA MANAGEMENT
Research data should be 'FAIR': findable, accessible, interoperable and
re-usable. The following questions were extracted from the "Guidelines on FAIR
Data Management in Horizon 2020 25 " document in order to assist the IMPAQT
consortium in making the project research data FAIR.
**FAIR Data**
### 1- Make data Findable (including provisions for metadata)
* Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism (e.g. persistent and unique identifiers such as Digital Object Identifiers)?
* What naming conventions do you follow?
* Will search keywords be provided that optimize possibilities for re-use?
* Do you provide clear version numbers?
* What metadata will be created? In case metadata standards do not exist in your discipline, please outline what type of metadata will be created and how.
### 2- Make data Openly Accessible
* Which data produced and/or used in the project will be made openly available as the default? If certain datasets cannot be shared (or need to be shared under restrictions), explain why, clearly separating legal and contractual reasons from voluntary restrictions.
* Note that in multi-beneficiary projects it is also possible for specific beneficiaries to keep their data closed if relevant provisions are made in the consortium agreement and are in line with the reasons for opting out.
* How will the data be made accessible (e.g. by deposition in a repository)? What methods or software tools are needed to access the data?
* Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
* Where will the data and associated metadata, documentation and code be deposited? Preference should be given to certified repositories which support open access where possible.
* Have you explored appropriate arrangements with the identified repository?
* If there are restrictions on use, how will access be provided?
* Is there a need for a data access committee?
* Are there well described conditions for access (i.e. a machine-readable license)?
* How will the identity of the person accessing the data be ascertained?
### 3- Make data Interoperable
* Are the data produced in the project interoperable, that is allowing data exchange and re-use between researchers, institutions, organisations, countries, etc. (i.e. adhering to standards for formats, as much as possible compliant with available (open) software applications, and in particular facilitating re-combinations with different datasets from different origins)?
* What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable?
* Will you be using standard vocabularies for all data types present in your data set, to allow inter-disciplinary interoperability?
* In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies?
### 4- Increase data Re-use (through clarifying licences)
* How will the data be licensed to permit the widest re-use possible?
* When will the data be made available for re-use? If an embargo is sought to give time to publish or seek patents, specify why and how long this will apply, bearing in mind that research data should be made available as soon as possible.
* Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why.
* How long is it intended that the data remains re-usable? Are data quality assurance processes described?
# Introduction
This Data Management Plan (DMP) is created for the project CO-CREATE –
Confronting Obesity: Co-creating policy with youth. Project details:
* Call: H2020-SFS-2016-2017
* Topic: SFS-39-2017
* Proposal number: 774210
The consortium collaborating in CO-CREATE includes members from 14
institutions from 9 different countries. In order to appropriately respond to
the identified research questions, the project will re-use survey data from
other studies, and also collect data from three stakeholder groups; young
people (16-18 years old), scientific experts on adolescent obesity, and policy
makers and business leaders. The data types that will be collected include
system maps, interview data, and survey data. Moreover, system modelling maps
will be produced based on the system maps and existing research evidence, as
will benchmarking scores for policy effects on individual level behaviour.
* This DMP contains the general principles for handling of each of the different data sources to be used in this project.
* Each partner is required to follow the principles outlined in this DMP, in protocols supporting the DMP, and in the signed Data Protection Agreement.
* Each partner is required to collect and store data in accordance with national law, and specific regulations of each partner institution. This also includes research ethics.
* The project will use a solution for secure processing of sensitive personal data called SAFE. SAFE is based on the “Norwegian Code of conduct for information security in the health and care sector” (Normen) and ensures confidentiality, integrity, and availability are preserved when processing sensitive personal data. Within this system the data will be anonymised before they are shared through open access.
* The data management centre at the University of Bergen will merge data from the different partners to build an international datafile for data analyses, document the data collected by each partner and openly share data according to details outlined below.
* In the first part, general administrative information about CO-CREATE will be presented, followed by a description of each data source.
\-
# 1\. Administrative information
## 1.1 Project name
Confronting Obesity: Co-creating policy with youth (CO-CREATE)
## 1.2 Project Description
CO-CREATE aims to reduce childhood obesity and its co-morbidities by working
with adolescents to create, inform and disseminate evidence-based obesity
prevention policies. The project applies a systems approach to provide a
better understanding of how factors associated with obesity interact at
various levels. The project focuses on young people within the ages of 16-18
as the specific target group, a crucial age with increasing autonomy and the
next generation of adults, parents and policymakers, and thus important agents
for change. CO-CREATE aims to involve and empower young people themselves, as
well as youth organizations, to foster a participatory process of identifying
and formulating relevant policies, deliberating such options with other
private and public actors, promoting relevant policy agenda and tools and
strategies for implementation. CO-CREATE strengthens interdisciplinary
research, and it has an inclusive multi-actor approach with involvement of
academics, policy makers, civil society, relevant industry and market actors
to ensure long-lasting implementation of the results. The project has a strong
gender profile, and considers the relevance of geographic, socio-economic,
behavioural and cultural factors. CO-CREATE engages with international
partners from different policy-contexts in Europe, Australia, South Africa and
the US. Applying largescale datasets, policy monitoring tools, novel
analytical approaches and youth involvement will provide new efficient
strategies, tools and programs for promoting sustainable and healthy dietary
behaviours and lifestyles.
## 1.3 PI / Researcher (person, institution or organisation)
Coordinator (PI): Prof. Knut-Inge Klepp, Norwegian Institute of Public Health,
Norway. Phone:
004721078052 email: [email protected]_ ORCID:
_https://orcid.org/0000-0002-3181-6841_
## 1.4 Participating researchers and/or organizations
Due to the size of the project, only the participant institutions and the
work-package leaders are listed below.
Participating Organisations:
<table>
<tr>
<th>
No.
</th>
<th>
Name
</th>
<th>
Country
</th> </tr>
<tr>
<td>
1
</td>
<td>
Norwegian Institute of Public Health (NIPH)
</td>
<td>
Norway
</td> </tr>
<tr>
<td>
2
</td>
<td>
University of Amsterdam (UvA)
</td>
<td>
Netherlands
</td> </tr>
<tr>
<td>
3
</td>
<td>
University of Oslo (UiO)
</td>
<td>
Norway
</td> </tr>
<tr>
<td>
4
</td>
<td>
World Obesity Federation (IASO-IOTF)
</td>
<td>
UK
</td> </tr>
<tr>
<td>
5
</td>
<td>
London School of Hygiene & Tropical Medicine (LSHTM)
</td>
<td>
UK
</td> </tr> </table>
<table>
<tr>
<th>
6 University of Cape Town (UCT)
</th>
<th>
South Africa
</th>
<th>
</th> </tr>
<tr>
<td>
7 Centro de Estudos e Investigacao em Dinamicas Sociais e Saude (CEIDSS)
</td>
<td>
Portugal
</td>
<td>
</td> </tr>
<tr>
<td>
8 World Cancer Research Fund International (WCRF)
</td>
<td>
Belgium
</td>
<td>
</td> </tr>
<tr>
<td>
9 EAT
</td>
<td>
Norway
</td>
<td>
</td> </tr>
<tr>
<td>
10 The University of Texas School of Public Health (UTHealth)
</td>
<td>
USA
</td>
<td>
</td> </tr>
<tr>
<td>
11 Press (Press)
</td>
<td>
Norway
</td>
<td>
</td> </tr>
<tr>
<td>
12 University of Bergen (UiB)
</td>
<td>
Norway
</td>
<td>
</td> </tr>
<tr>
<td>
13 SWPS University of Social Sciences and Humanities (SWPS University)
</td>
<td>
Poland
</td>
<td>
</td> </tr>
<tr>
<td>
14 Deakin University (DEAKIN)
</td>
<td>
Australia
</td>
<td>
</td> </tr>
<tr>
<td>
Work-package leaders:
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
WP No.
</td>
<td>
Name
</td>
<td>
Name and E-mail
</td>
<td>
</td>
<td>
Org. No.
</td> </tr>
<tr>
<td>
1
</td>
<td>
Project management and coordination
</td>
<td>
Therese Bakke [email protected]_
</td>
<td>
1
</td> </tr>
<tr>
<td>
2
</td>
<td>
Policy assessment and monitoring
</td>
<td>
Kate Oldridge Turner
[email protected]_
</td>
<td>
8
</td> </tr>
<tr>
<td>
3
</td>
<td>
Obesity rates and energy balance related behaviours
</td>
<td>
Arnfinn Helleve
[email protected]_
</td>
<td>
1
</td> </tr>
<tr>
<td>
4
</td>
<td>
Obesity system mapping
</td>
<td>
Cecile Knai
[email protected]_
</td>
<td>
5
</td> </tr>
<tr>
<td>
5
</td>
<td>
Youth Alliances for
Overweight Prevention
Policies
</td>
<td>
Christian Bröer
[email protected]_
</td>
<td>
2
</td> </tr>
<tr>
<td>
6
</td>
<td>
Dialogue forum with representatives from policy and business
</td>
<td>
Sudhvir Singh sudhvir [email protected]_
</td>
<td>
9
</td> </tr>
<tr>
<td>
7
</td>
<td>
Evaluation of CO-CREATEd
policy interventions and methodology
</td>
<td>
Nanna Lien
[email protected]_
</td>
<td>
3
</td> </tr>
<tr>
<td>
8
</td>
<td>
Open science and fair data management
</td>
<td>
Oddrun Samdal
[email protected]_
</td>
<td>
12
</td> </tr>
<tr>
<td>
9
</td>
<td>
Dissemination, exploitation and communication
</td>
<td>
Hannah Brinsden
[email protected]_
</td>
<td>
4
</td> </tr>
<tr>
<td>
10
</td>
<td>
Ethics requirements
</td>
<td>
Isabelle Budin-Ljøsne
[email protected]_
</td>
<td>
1
</td> </tr> </table>
## 1.5 Project Data Contacts
During the project, questions about the data or project should be addressed to
the Project Coordinator:
<table>
<tr>
<th>
Name
</th>
<th>
E-mail
</th>
<th>
Org. No.
</th> </tr>
<tr>
<td>
Knut-Inge Klepp
</td>
<td>
[email protected]_
</td>
<td>
1
</td> </tr>
<tr>
<td>
Therese Bakke
</td>
<td>
[email protected]_
</td>
<td>
1
</td> </tr> </table>
After the project: When data is delivered to the data repository, questions
related to the data should be addressed to the PI and the leader of WP8:
<table>
<tr>
<th>
Name
</th>
<th>
E-mail
</th>
<th>
Org. No.
</th> </tr>
<tr>
<td>
Knut-Inge Klepp
</td>
<td>
[email protected]_
</td>
<td>
1
</td> </tr>
<tr>
<td>
Oddrun Samdal
</td>
<td>
[email protected]
</td>
<td>
12
</td> </tr> </table>
## 1.6 Ownership of the material
As a general rule, results are owned by the party that generates them, with a
strong commitment to share the data for open use.
The physical activity framework and policy database that links to the healthy
diets framework and the policy database (NOURISHING), ref. WP2, D2.1 & D2.5,
shall be owned by World Cancer Research Fund International.
An interactive youth-focused website hosted by the World Obesity Federation,
including materials generated within the project, shall be owned by the World
Obesity Federation.
Further details in the CO-CREATE Consortium Agreement.
# 2\. Data Summary
## 2.1 Purpose of the data collection/generation and its relation to the
objectives of the project
By focusing on upstream factors and context change instead of individual
behaviour change, CO-CREATE will generate sustainable impacts that contribute
to narrowing inequalities in adolescent obesity. The project will i) evaluate
and also provide methodology of how to assess effectiveness of existing
policies on adolescent obesity, ii) provide knowledge base on what young
people, experts and policy makers find are important policy factors to address
when aiming to prevent adolescent obesity, and iii) provide a model that will
focus on how to involve young people and the range of relevant stakeholders by
explicitly politicizing the issue of obesity. This will be actualized by
providing specific obesity related policy proposals, and by designing and
testing advocacy tools and strategies for implementation and evaluation.
Figure 1 depicts the flow of data within the project and the transfer to the
public version of the data. All data will be collected and stored in a
safeguarded password protected system (see section 5 for details) to ensure
data protection. Safeguarded data include person details such as names and
contact details, but also indirectly identifiable data from notes. Only
anonymized data will be shared openly.
Figure 1: CO-CREATE Data management showing flow of data and data security
measures implemented as detailed in table
1 below and in accordance with the project Data Protection Agreement as
detailed in deliverable 10.3
## 2.2 The project objectives and how data collection/generation is connected
to them
1. Provide methodology and evaluating existing policies
To develop, test, and subsequently provide a valid, reliable, and easily
accessible and applicable methodology for monitoring and benchmarking policies
which directly or indirectly can influence energy balance related behaviours
(EBRB), no new data will be collected, but rather a benchmark of EBRB policies
will be generated based on existing data sources. Two sources of existing data
will be used; national policy documents addressing EBRB across the seven
European CO-CREATE countries, and data regarding individual level adolescent
EBRB and BMI measures. The individual level data will be re-used from the two
international surveys “Health Behaviour in School-aged Children. A WHO
Cross-national study” (HBSC) (www.hbsc.org) and the WHO European Childhood
Obesity Surveillance Initiative (COSI) ( _http://www.euro.who.int/en/health-
topics/diseaseprevention/nutrition/activities/who-european-childhood-obesity-
surveillance-initiative-cosi_ ), that both apply open access procedures to
their data. The benchmarking of each country’s policy will address its content
and effect on individual level behaviour in adolescents. The national policy
documents and the result of the benchmarking process will be made available on
the website of the World Cancer Research Fund International.
2. Provide knowledge base
The project will develop and deliver a set of visual system maps of policy-
dependent multi-level drivers of adolescent obesity across six countries
(Netherlands, Norway, Poland, Portugal, South
Africa, and the United Kingdom), and synthesize them into a single consensus
overview maps (WP4). For this purpose, both existing data from published
research as well a new data collected from mapping workshops with young people
and other stakeholders. Notes will be taken from the mapping workshops
involving discussions where the participants are asked to develop conceptual
maps of the drivers of positive energy balance through the food and physical
activity systems, informed by the existing evidence base. The conceptual maps
and the transcript of notes will be used to generate an overarching map
(system dynamic model) that depicts the key policy-amenable drivers of
adolescent obesity across Europe.
3. Develop obesity related policy proposals
In order to provide a model that will focus on how to involve young people and
the range of relevant stakeholders by explicitly politicizing the issue of
obesity the project will establish, make use of, and evaluate multi-actor
dialogue forums between public and private sector stakeholders (including
adolescents) that define and/or are influenced by obesity prevention policies
to work towards wider acceptance and support for effective obesity prevention
policies (WP6). The aim of the dialogue forums is to address and help refine
obesity prevention policies developed by youth, that policymakers and
businesses can respond to, based on engagement with youth to work towards
positioning youth as active agents of change and generating support for
effective obesity prevention policies. These policies and solutions will
accelerate the move from dialogue to implementation at a local, national and
regional level.
## 2.3 Types and formats of data that will be collected and generated by the
project
Three types of data will be collected in the CO-CREATE project: system maps,
interview data, and survey data. Based on the systems maps developed in the
mapping workshops and in combination with existing research evidence,
appropriate system modelling maps will be generated. Whilst building on
existing policy documents and existing survey data, benchmarking of national
policies will be generated. In section 2.4 the data protection measures
undertaken are presented in more detail.
System maps
System maps - in the form of causal loop diagrams – will be generated via
workshops, using a process called ‘group model building’ (GMB). Mapping
workshop participants will include adolescents from diverse socio-economic
backgrounds. The UK team will also host workshops for policy-makers and
academic experts working across Europe.
These system maps will represent the factors perceived by groups of
participants to affect the diets and physical activity of adolescents. In
addition to generating the maps, participants will discuss ways in which these
systems could be reshaped through policy actions in order to generate
healthier outcomes; this information will be captured via notes taken on paper
or digitally (laptop or tablet) but not attributed to any individual
participant. These notes will help in writing the reports, and in informing
the next steps (WP5 and WP6) but will not be otherwise qualitatively analysed.
Interview data
Following the youth alliance participation, group and individual interviews
will take place in order to address the experience of the adolescents. These
interviews will be transcribed and coded.
Survey data
Adolescents participating in the youth alliances and other stakeholders
recruited to take part in the dialogue forums will be asked to fill in a
survey regarding their readiness for change and attitudes towards actions
preventing obesity. The youth alliance participants will be asked to fill in
the questionnaire prior to entering the alliance (baseline), and thereafter
regularly (monthly), with the final one three months after the stakeholder
forum. The survey to other stakeholders will be undertaken before, after and
three months after the dialogue forum. In order to describe the diversity of
the participants, the baseline questionnaire will also include data on date of
birth, gender, ethnicity, spoken language at home, their height and weight,
thoughts about their weight, socioeconomic status, physical activity habits,
and eating habits.
### 2.4 Data protection measures
Table 1 shows the data protection measures that will be undertaken for the
treatment of directly and indirectly identifiable personal data. Prior to
anonymization of data for the purpose of open access, all data will be
collected and stored in safeguarded password protected systems.
Table 1: Data protection measures in CO-CREATE during project period and post
project period, in accordance with roles of responsibility presented in figure
1.
<table>
<tr>
<th>
Description of personal data collected
</th>
<th>
Data protection measures implemented
</th> </tr>
<tr>
<td>
</td>
<td>
Delivery from data providers
</td>
<td>
Level/privacy status
</td>
<td>
Data protection measures
(responsible for fulfilment – see figure 1)
</td>
<td>
Data management
and risk minimising measures within the project period
</td>
<td>
Data
management
after project period
</td> </tr>
<tr>
<td>
WP-4
</td>
<td>
Participant name and contact information (e.g. phone number, email)
</td>
<td>
Person level/directly identifiable
</td>
<td>
Secure storage with access control
(1)
</td>
<td>
Delete in accordance with project procedure
</td>
<td>
N.A
</td> </tr>
<tr>
<td>
</td>
<td>
Systems maps
</td>
<td>
Group level/anonymi sed
</td>
<td>
N.A
</td>
<td>
In line with ICF and in accordance with
obligations in GA 774210 and publication procedures
</td>
<td>
Open access published results Provide as open access data
</td> </tr>
<tr>
<td>
WP-5
</td>
<td>
Participant name and contact information (e.g. phone number, email)
</td>
<td>
Person level/directly identifiable
</td>
<td>
Secure storage with access control
(1)
</td>
<td>
Delete in accordance with project procedure
</td>
<td>
N.A
</td> </tr>
<tr>
<td>
</td>
<td>
Notes from meetings
</td>
<td>
Group level (indirectly identifiable)
</td>
<td>
Secure storage with access control (1,2,3) Secure data delivery when
transferring
data (1,2,3)
</td>
<td>
In line with ICF and in accordance with
obligations in GA 774210 and publication procedures
</td>
<td>
Open access published results Secure fully anonymised data Provide as open
access data
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
Alliance policy forms
</th>
<th>
Group level indirectly identifiable)
</th>
<th>
Secure storage with access control (1,2,3)
Secure data delivery when transferring data (1,2,3)
</th>
<th>
In line with ICF and in accordance with
obligations in GA 774210 and publication procedures
</th>
<th>
Open access published results Secure fully anonymised data Provide as open
access data
</th> </tr>
<tr>
<td>
</td>
<td>
Vlogs
</td>
<td>
Person level/directly identifiable
</td>
<td>
Separate follow up as part of Youth
Alliances (1) Provided channels controlled by the project partners (ref.
D9.1 version
1.2)
</td>
<td>
In line with ICF and in accordance with
obligations in GA 774210 and publication
procedures
Separate follow up in information procedures
</td>
<td>
Public access through platforms chosen by vlogger
</td> </tr>
<tr>
<td>
WP-6
</td>
<td>
Participant name and contact information (e.g. phone number, email)
</td>
<td>
Person level/directly identifiable
</td>
<td>
Secure storage with access control
(1)
</td>
<td>
Delete in accordance with project procedure
</td>
<td>
N.A
</td> </tr>
<tr>
<td>
</td>
<td>
Notes from forums
</td>
<td>
Group level/ potentially indirectly identifiable
</td>
<td>
Secure storage with access control (1,2,3) Secure data delivery when
transferring
data (1,2,3)
</td>
<td>
In line with ICF and in accordance with
obligations in GA 774210 and publication
procedures
</td>
<td>
Open access published
results
</td> </tr>
<tr>
<td>
WP-7
</td>
<td>
Participant name and contact information (e.g. phone
</td>
<td>
Person level/directly identifiable
</td>
<td>
Secure storage with access control
(1)
</td>
<td>
Delete in accordance with project procedure
</td>
<td>
N.A
</td> </tr>
<tr>
<td>
</td>
<td>
number, email)
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
Questionnaire
</td>
<td>
Person level/ indirectly identifiable
</td>
<td>
Secure storage with access control (1,2,3)
Personal identifiers replaced with code (1)
Identifiable data and code key stored separately with separate access control (1)
Secure data delivery when transferring data (1,2,3)
(see the pseudonymisation sketch after this table)
</td>
<td>
In line with ICF and in accordance with
obligations in GA 774210 and publication procedures
</td>
<td>
Open access published results Secure fully anonymised data Provide as open
access data
</td> </tr> </table>
## 2.5 Re-use of existing data
Existing data from two international surveys will be used: “Health Behaviour
in School-aged Children.
A WHO Cross-national study” (HBSC) (www.hbsc.org) and the WHO European
Childhood Obesity Surveillance Initiative (COSI) (
_http://www.euro.who.int/en/health-
topics/diseaseprevention/nutrition/activities/who-european-childhood-obesity-
surveillance-initiative-cosi_ ). Both surveys apply open access procedures to
their data.
The HBSC study has collected data on adolescent health behaviours in several
countries since 1981.
Currently there are 45 countries from across Europe and North-America involved
with the study.
Survey data collection is conducted every four years among a nationally
representative sample of 11, 13 and 15-year-olds. For the current project,
variables on height, weight, body perception, eating behaviours and physical
activity will be used. The 10th cycle of data collection took place 2017/2018.
A total of 200.000 students participate each survey year.
The COSI study has since 2007 collected nationally representative data on
objectively measured weight and height measurements among 6-9-year-olds. In
addition, the study collects information on school characteristics of
relevance for healthy eating and physical activity, and information from
parents on their child’s eating behaviours and physical activity. The fifth
round of data collection took place during the 2018–2019 school year. A total
of 300.000 children participate during each cycle.
Both studies follow standardised protocols for sampling, data collection and
data cleaning, as well as for translation of study items across the
participating countries, allowing cross-country comparison.
## 2.6 Expected size of the data
2.6.1 Collected data from adolescents
Approximately 600 adolescents will participate in different dialogue
activities organised across WP4-7.
The mapping workshops aiming to develop the system maps will take place in the
Netherlands, Norway, Poland, Portugal, the United Kingdom, and South Africa.
In each country, four schools will be selected and invited to host one mapping
workshop each. It is expected that a total of 24 workshops will be conducted.
Each workshop will aim to gather 10 to 15 adolescents. Thus, it is estimated
that about 360 adolescents will participate in the workshops led by WP4, where
60 adolescents per country (15 adolescents x 4 schools = 60) would be
recruited in six countries (60 x 6 = 360). From the 24 workshops, a similar
number of system maps is expected as a data outcome. The system maps, in
combination with notes from the workshops and existing evidence, will be used
to generate system dynamics models visualizing the relationships between the
variables, to determine the direction of influence that increases or decreases
adolescent EBRB and obesity, and to illustrate balancing and reinforcing
feedback loops. The
system dynamic models represent new data that can be used by others for
further research.
For the youth alliances a variety of recruitment strategies will be used,
including schools, youth organisations and peers. The aim is to recruit 40
adolescents in each of the six countries, establishing three alliances in each
country with 10-15 adolescents in each. In addition to these country-based
alliances, 40 adolescent students at the International school will be
recruited in the Netherlands to specifically address the EU level strategy in
an International Youth Alliance. In total, CO-CREATE will recruit
approximately 240 adolescents from six countries for participation in the
Youth Alliances, focus group and individual interviews.
2.6.2 Collected data from other stakeholders
About 30 experts and policymakers attending two different conferences will
participate in group model building workshops. Similarly, policy makers,
business leaders, and other stakeholders will participate in dialogue forums.
The exact group sizes for these group model building workshops and dialogue
forums are still under development.
2.6.3 Re-use of existing data
For each of the surveys in the COSI study 300.000 children participate, whilst
200.000 adolescents participate in each HBSC study. As a starting point, data
from all surveys in the COSI study will be used, as well as HBSC data from the
2001 surveys and onwards. From 2001, major changes were made to the HBSC study
questionnaire as well as the introduction of new EBRB measures, hence data
after these changes were put into place will be used. By including data from
the last five surveys of the two studies, data from a total of 1.500.000
children and 1.000.000 adolescents across Europe and North America are
available for analyses.
## 2.7 Data utility
All data collected in the CO-CREATE project will be described and documented
to facilitate use of the data through open access.
The principal document for data description will be the protocol developed for
the different data collections. The protocol outlines the theoretical
framework for the data to be collected, translation with back-translation
procedures to ensure comparability of data and concepts used, recruitment
procedures (described in deliverable D10.1), coding procedures for qualitative
data and computing guidelines for quantitative data. Thus, the protocol
standardizes procedures to be followed by all countries and for all activities
taking place in the project, and conclusively will be a vital document for
everyone wanting to use the data collected in the project.
Guidelines will be developed for information from each mapping workshop, and
interviews after participation in youth alliances and dialogue forums. These
files will be documented and stored safely (including names of files etc.).
Moreover, procedures for how countries should transcribe focus group
interviews and individual interviews will be provided.
For the quantitative survey data a codebook has been developed providing
variable names and labels, values and labels. The University of Oslo
(organisation no 3) is responsible for undertaking the survey data collection
for all involved partners. They will be using a safeguarded password protected
system called TSD for this purpose. With this system, clear procedures are in
place on how to ensure confidentiality during data collection processes
(including storage of codes connecting longitudinal data),
as well as instructions on how to ensure anonymity of data submitted to the
data management centre, placed at the University of Bergen (organisation no
12).
Guidelines will be developed for how each country will need to document their
data and in which format the data should be delivered, and how each file
should be named. Documentation of the data relates to concrete description of
recruitment procedures undertaken (e.g. number of invitations sent, number of
responses received, final number of participants), any deviations from
protocol (e.g. recruitment, data collection (procedures, instruments), any
coding procedures used (e.g. for qualitative data).
Based on the documentation of the data and the data itself, each file will be
checked for consistency with protocol and guidelines. Agreed cleaning
procedures to secure comparability of the data across the participating
countries will be performed by the data management centre. National files that
have been cleaned will be merged into international datafiles both for the
qualitative and quantitative data for use in the project as well as by
external users.
# 3\. FAIR data
The project will adopt the FAIR principles of making data findable,
accessible, interoperable and re-usable.
## 3.1 Making data findable, including provisions for metadata
The data from the project will be made easily available on a data website. A
Digital Object Identifier (DOI) system will be developed and used when
publishing findings based on the data, for easy access to both check and
further use the data. Our metadata will be in the form of Atlas.ti documents.
Our naming convention will follow a uniform pattern: type of
data-country-pic-date taken-version. Example: fgd-poland-zofia-170819-v1. A
set of key words will be developed to identify the datafiles from the CO-
CREATE project.
Use of data is vital to identify any errors in the data. Additionally, the
data will provide a basis for standardising derived variables. To document such
changes made to the international data files it is important to have clean and
concise version numbers of the data file, as well as documentation of changes
undertaken from one version to the next.
Metadata providing information on overall protocol for recruitment procedures,
sample, data collections guidelines and instruments, deviations from protocol,
and any coding of data undertaken will be presented. The metadata will be
provided along with the data.
## 3.2 Making data openly accessible
All data collected in the project will be anonymised, and will after cleaning
be shared through open access as soon as the data has undergone the cleaning
procedures and international quality assurance procedures. The quality
assurance process will need internal use of the data between 6-9 months to
allow identification of errors in data that have not been discovered through
the cleaning process, and that typically are better observed when analysing
the data. The data will be made available through an open access data
deposition repository. The decision on which open access data deposition
repository to use has not yet been made, but some relevant options have been
identified.
We are looking into two options for access: either direct download, or a
request for a name and email address to keep track of those interested in the
data and to ensure the data are not used for commercial purposes. If
the latter approach is chosen, we will ensure that storage of person
information is undertaken in alignment with the regulations for the General
Data Protection Regulation (GDPR).
The qualitative metadata, namely the qualitative coding and the analysis,
might be accessed through Atlas.ti in which different data will be organized
in different categories, such as data type, locality, participants and other
relevant emerging categories. Atlas.ti offers an XML exchange format for
future data stewardship and is part of the QDA-XML Exchange Standard (REFI
on _http://www.qdasoftware.org_ ).
The quantitative data will be made available as SPSS files, but can also be
exported as excel files.
## 3.3 Making data interoperable
In the protocol for the CO-CREATE project we will identify approaches and
variables used in other studies, so that data can be connected across studies.
When appropriate, international standard coding systems will be used (ISO
coding), e.g. for country codes and socio-economic codes. The project will aim
to connect its data with other projects and make the CO-CREATE data available
for other projects to connect to.
## 3.4 Increase data re-use (through clarifying licences)
The data will be made available as soon as it has been checked for errors and
documented properly. There will be no licences required to access the data.
# 4\. Allocation of resources
The project has included resources for data management as part of its budget.
That involves resources to prepare the data for open access as well as to
manage the future maintenance of the data. The data management centre is
situated at the University of Bergen and will aim to maintain access to the
data also after the project period, or make sure to deposit the data to a
repository that will maintain access to the data.
# 5\. Data security
The University of Bergen (organisation no 12) has a strong fire wall security
and safety related to storage of data. A copy of all the original data will
therefore be kept safe and provide basis for data recovery should the online
platform fail.
The project will use a solution for secure processing of sensitive personal
data called SAFE. SAFE is based on “Norwegian Code of conduct for information
security in the health and care sector” (Normen) and ensures confidentiality,
integrity, and availability are preserved when processing sensitive personal
data. SAFE uses 2-factor authentication, and each user will have to supply a
one-time code received on their phone. This process requires each user to be
set up to use SAFE via the University of Bergen and WP8, ensuring no
unauthorized access is possible. All researchers requiring access to SAFE will
be guided by WP8, to ensure correct installation and usage. A similar data
collection and data storage system (TSD) has been developed by the University
of Oslo (organisation no 3), and this system will be used for safeguarded data
collection of the survey data.
Data will be anonymised before transfer to the SAFE system at the University
of Bergen (see Table 1).
# 6\. Ethical aspects
Emphasis will be given to secure confidentiality during data collection and
anonymity when data are shared. Informed consent is sought from all
participants with regard to the data and information collected in the project,
including approval to open access sharing of the data.
# Research data
Generally speaking, research data refers to data collected, processed,
observed, or generated within a project with the aim to produce original
research results. Data are processed, organized, structured and interpreted in
order to determine their true meaning and becoming in this way valuable
information.
In the framework of research activities, data can be divided into different
categories, depending on their scope, their origin as well as their
processing. For instance, very often research data are classified in:
* Observational data
* Experimental data
* Simulation data
* Derived data
Observational data refers to data captured in real time (e.g. sensor data,
survey data, sample data etc.); experimental data refers to data obtained from
laboratory equipment (e.g. data resulting from validation in the field),
whereas simulation data indicates data created from numerical models. Derived
data are those generated from existing data.
In this context, research data may include different formats:
* Text/word documents, spreadsheets, questionnaire, transcripts
* Laboratory/Field notebooks, diaries, codebook
* Audios, videos, pictures, photographs
* Test responses
* Slides
* Artifacts, specimen, samples
* Collection of digital objects acquired and generated during the research process
* Data files
* Database contents
* Protocols, procedures
* Models, algorithms, software codes, scripts, log files, simulations
* Methodologies and workflows
* Standard operating procedures and protocols
* Etc.
According to the key principles for research data management, and in
particular to the “ _Guidelines on FAIR Data Management in Horizon 2020_ ”,
research data must be _findable_, _accessible_, _interoperable_ and
_re-usable_. The FAIR data guiding principles are reported in the following:
1. To be **Findable** any Data Object should be uniquely and persistently identifiable
1. The same Data Object should be re-findable at any point in time, thus Data Objects should be **persistent** , with emphasis on their metadata
2. A Data Object should minimally contain basic machine actionable metadata that allows it to be distinguished from other Data Objects
3. Identifiers for any concept used in Data Objects should therefore be **Unique** and **Persistent**
2. Data is **Accessible** in that it can be always obtained by machines and humans
1. Upon appropriate authorization
2. Through a well-defined protocol
3. Thus, machines and humans alike will be able to judge the actual accessibility of each Data Object.
3. Data Objects can be **Interoperable** only if:
1. (Meta) data is machine-actionable
2. (Meta) data formats utilize shared vocabularies and/or ontologies
3. (Meta) data within the Data Object should thus be both syntactically parseable and semantically machine-accessible
4. For Data Objects to be **Re-usable** additional criteria are:
1. Data Objects should be compliant with principles 1-3
2. (Meta) data should be sufficiently well-described and rich that it can be automatically (or with minimal human effort) linked or integrated, like-with-like, with other data sources
3. Published Data Objects should refer to their sources with rich enough metadata and provenance to enable proper citation
After having investigated the aforementioned aspects, it was decided that the
data management plan of E-LOBSTER will be based on the following key elements:
**Dataset reference and name:** An identifier has to be produced for each
dataset. In particular, the technical data related to simulations, validation
through laboratory tests, validation in the field (WP1, WP2, WP4, WP5) will be
referred to by using a code composed of the date (year, month, day, hour, e.g.
20181101_0845) and a name.
**Data set description:** The data that will be generated or collected during
E-LOBSTER will be described, as well as their origin (in case they are
collected). A readme.txt file will provide the basic information.
**Standards and metadata:** Reference to existing suitable standards of the
discipline. If these do not exist, an outline on how and what metadata will be
created has to be given.
**Data sharing** : Description of how data will be shared, including access
procedures, embargo periods (if any) will be provided. In case the dataset
cannot be shared, the reasons for this should be mentioned (e.g. IPR, personal
data, intellectual property, commercial, security-related).
**Archiving and preservation (including storage and backup):** Procedures that
will be put in place for long-term preservation of the data will be described.
Indication of how long the data should be preserved, what is its approximated
end volume.
# Open Access
## Measures to provide Open Access to peer-reviewed scientific publications
Fully in line with the Grant Agreement, each E-LOBSTER beneficiary will ensure
open access (free of charge online access for any user) to all peer-reviewed
scientific publications relating to its results.
In particular, it will:
1. as soon as possible and at the latest on publication, deposit a machine-readable electronic copy of the published version or final peer-reviewed manuscript accepted for publication in a repository for scientific publications;
Moreover, each E-LOBSTER beneficiary will aim to deposit at the same time the
research data needed to validate the results presented in the deposited
scientific publications.
2. ensure open access to the deposited publication — via the repository — at the latest: on publication, if an electronic version is available for free via the publisher, or within six months of publication in any other case.
3. ensure open access — via the repository — to the bibliographic metadata that identify the deposited publication.
The bibliographic metadata must be in a standard format and must include all
of the following:
* the terms “European Union (EU)”, “Horizon 2020”, “Research and Innovation Action (RIA)”, “E-LOBSTER”, “Grant Agreement No 774392”
* the publication date
* the length of embargo period if applicable
* information about the persistent identifier
## Management of the Research Data generated and/or collected during the
project
The E-LOBSTER project recognizes the value of regulating research data
management issues. Accordingly, in line with the Grant Agreement, the
beneficiaries will, to the extent possible, deposit the research data needed
to validate the results presented in the deposited publications in a clear and
transparent manner. However, after careful consideration, they have opted not
to take part in the Pilot Action on Open Research Data. The reason for this is
that the data underlying the project’s activities may be of a sensitive
nature, and their protection may be important to guarantee the commercial
perspectives of the industrial partners in particular. These data will be put
at the disposal of relevant consortium partners, as well as of the members of
the stakeholder groups, after a dedicated Non-Disclosure Agreement (NDA) has
been signed. However, they will not be disclosed (with the exception of
some cases) in order to safeguard the legitimate interests of all involved
entities.
In the following chapters, the data management plan at project level as well
as the individual partner data management plan will be presented.
# Data management plan at project level
In this chapter, an overview of the data management plan at project level will
be presented.
In particular, in the table below for each E-LOBSTER WP, the following
information will be provided:
* Number and name of the task
* Description of the data collected/generated at task level
* Partner owner of the data
* Format
* Confidentiality level.
More accurate information will be presented in the next chapter, where the
data management plan at individual partner level will be illustrated.
**Table 1: Overview of E-LOBSTER Data management plan at project level**
<table>
<tr>
<th>
**WP1 Analysis of energy losses for fully integrated power distribution and
transport networks**
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Task**
</td>
<td>
**Description**
</td>
<td>
**Data**
</td>
<td>
**Owner(s)**
</td>
<td>
**Format**
</td>
<td>
**Confidential**
</td> </tr>
<tr>
<td>
T1.1:
</td>
<td>
Analysis of the energy losses within the traction chain and identification of
measures for energy losses prevention
</td>
<td>
-Software code of the bespoke simulator for Energy losses evaluation
-Dataset related to Metro of Madrid to be used for the analysis (MDM). -Dataset related to the outcomes of simulations on power losses with various feeding arrangement based on data of metro of Madrid
-Different datasets related to simulations
for the validation of the tool for the railway network
\- Modelling of the railway traction system
</td>
<td>
MDM, UOB
</td>
<td>
.xls, .dat, .mat,
jpeg, .txt., .pdf, .docx, .ppt
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
T1.2
</td>
<td>
Analysis of the energy losses within the power distribution grid and
identification of measures for energy losses prevention
</td>
<td>
-Dataset related to parameters monitoring of the power distribution network
-Dataset related to Metro of Madrid to be used for the analysis (MDM).
-Modelling and simulation of the power distribution networks and components -Software code of the smart Grid simulator
</td>
<td>
UNEW, RINA-C,
MDM
</td>
<td>
.xls, .dat, .mat,
jpeg, .txt., .pdf, .docx, .ppt
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
T1.3
</td>
<td>
Design of an advanced simulator for rail electrification systems fully
integrated with the power distribution system via a SOP and a DC bus
connection
</td>
<td>
-Software code of the advanced simulator
-Dataset related to simulator
-Combination and Modelling of both power distribution networks and railway networks
\- Control strategy of the sSOP
</td>
<td>
MDM, UOB, UNEW, TPS
</td>
<td>
.xls, .dat, .mat,
jpeg, .txt., .docx, .pdf, .ppt
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
T1.4
</td>
<td>
Impact of consumer behaviour on network losses
</td>
<td>
\- Data about analysis of consumer behaviour and tools for reduction of energy
consumption
</td>
<td>
FFE, RINA-C, UOB,
UNEW
</td>
<td>
.pdf, .docx
</td>
<td>
No
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
-Data related to the analysis of market, socioeconomic, legal and institutional framework situations
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**WP2 - Enabling technologies**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Task**
</td>
<td>
**Description**
</td>
<td>
**Data**
</td>
<td>
**Owner(s)**
</td>
<td>
**Format**
</td>
<td>
**Confidential**
</td> </tr>
<tr>
<td>
T2.1
</td>
<td>
Electrified Transport and distribution network key elements and main
specification towards reduction of losses
</td>
<td>
\- Data related to the analysis of the current situations in terms of energy
losses in the railway and power distribution networks,
state of the art of most suitable technologies for the losses reduction
(Handbook)
</td>
<td>
UOB, RINA-C, TPS,
FFE,SPD,LIBAL,
RSSB
</td>
<td>
.pdf, .docx
</td>
<td>
No
</td> </tr>
<tr>
<td>
T2.2
</td>
<td>
Smart SOP (Soft Open
Point) Power Electronics Specifications and interaction with smart metering
</td>
<td>
-Software code based on PLECS for the simulations of sSOP performance -Data related to the simulations of the sSOP
-Specifications of the sSOP
</td>
<td>
TPS
</td>
<td>
.PLECS, .pdf, .dat, .mat, .docx
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
T2.3
</td>
<td>
Identification, design and development of the most suitable Electrical Storage
for the mutual benefit interexchange of electricity
</td>
<td>
-Custom made software code based on C# language for the simulations of the storage
system performance
-Data related to the simulations of the storage system
-Specifications of the storage system
</td>
<td>
LIBAL
</td>
<td>
C#, .dat, .mat, .pdf, .docx, .txt
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
T2.4
</td>
<td>
Design and development of
Power Electronics prototypes
</td>
<td>
\- Schema and design of the power electronic prototypes
</td>
<td>
TPS, LIBAL
</td>
<td>
.pdf, .docx, .vsd
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**WP3 Policy, regulation and standards**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Task**
</td>
<td>
**Description**
</td>
<td>
**Data**
</td>
<td>
**Owner(s)**
</td>
<td>
**Format**
</td>
<td>
**Confidential**
</td> </tr>
<tr>
<td>
T3.1
</td>
<td>
Identification of standard operating parameters and definitions towards future
marketability
</td>
<td>
-Repository gathering information on existing standards and their classification
</td>
<td>
RSSB, FFE, MDM, RINA-C, UITP
</td>
<td>
.pdf, .docx, .xls
</td>
<td>
No
</td> </tr> </table>
<table>
<tr>
<th>
T3.2
</th>
<th>
Policies for the support of the marketability of ELOBSTER Solution
</th>
<th>
-Repository gathering information on existing policies and their classification - Data related to the three stakeholder workshops with the 2 Stakeholders Groups (DSOs and Transport manager) in close collaboration with WP6 (results of questionnaires, surveys, inputs etc).
</th>
<th>
RINA-C, RSSB, FFE, UITP
</th>
<th>
.pdf, .docx, .xls, .ppt, online survey
</th>
<th>
No
</th> </tr>
<tr>
<td>
T3.3
</td>
<td>
E-LOBSTER Compliancy to Energy and Transport
regulation
</td>
<td>
\- Data related to the analysis of the compliancy with current regulations
</td>
<td>
RSSB, LIBAL, FFE
</td>
<td>
.pdf, .docx, .xls, .ppt,
</td>
<td>
No
</td> </tr>
<tr>
<td>
T3.4
</td>
<td>
Standardization procedures towards E-LOBSTER marketability
</td>
<td>
* Data related to existing standards
* Data related to gap to be covered
</td>
<td>
FFE, RSSB
</td>
<td>
.pdf, .docx, .xls, .ppt,
</td>
<td>
No
</td> </tr>
<tr>
<td>
T3.5
</td>
<td>
Final proposals to unlock policies, standards and regulatory bottlenecks
</td>
<td>
-Proposals for future standards
</td>
<td>
RSSB, LIBAL, RINA-C
</td>
<td>
.pdf, .docx, .ppt,
</td>
<td>
No
</td> </tr>
<tr>
<td>
T3.5
</td>
<td>
Best practices for the cyber security of transports and distribution network
smart management systems
</td>
<td>
\- Guidelines and check list for cybersecurity of transports and distribution
network smart management systems
</td>
<td>
RINA-C, RSSB, FFE, MDM
</td>
<td>
.pdf, .docx, .xls, .ppt,
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
</td> </tr>
<tr>
<td>
**WP4 S** ystem integration **– Measures and technologies for collaborative
rail and electricity networks for losses reduction and mutual benefit
interaction**
</td> </tr>
<tr>
<td>
**Task**
</td>
<td>
**Description**
</td>
<td>
**Data**
</td>
<td>
**Owner(s)**
</td>
<td>
**Format**
</td>
<td>
**Confidential**
</td> </tr>
<tr>
<td>
T4.1
</td>
<td>
Identification of correlations and all possible shared degrees-offreedom
between Railway Energy Network
and Distribution Electricity
Network
</td>
<td>
-Data related to the analysis of the global electric framework including constraints and degrees-of-freedom of both the railway electric grid (i.e. train scheduling, overtension related to the start/stop of trains etc.) and the distribution network electric grid (i.e. quality of the power, frequency level, traditional daily supply scheduling etc… - ).
</td>
<td>
UOB, RINA-C,
UNEW, UOB,
MDM, RSSB, FFE
</td>
<td>
.pdf, .docx, .xls, .ppt,
</td>
<td>
No
</td> </tr> </table>
<table>
<tr>
<th>
T4.2
</th>
<th>
Reference Architecture for Operational R+G (Railwayto-grid) Management
System
</th>
<th>
* Data and characterization of the different interfaces
* Data related to Functional analysis - Schemas of Physical System Architecture and Interface Architecture
* Data related to behavioral Architecture, concerning the system usability, applications and use cases
</th>
<th>
UNEW, RINAC, UOB, MDM, TPS
</th>
<th>
.pdf, .docx, .xls, .ppt,
</th>
<th>
Yes
</th> </tr>
<tr>
<td>
T4.3
</td>
<td>
Operational R+G
Management System
</td>
<td>
* Source code for the R+G Management system
-Data related to the testing of the R+G
Management system
-Design of the R+G Management system
* Algorithm related to the R+G
Management system
-Data related to Testing of R+G
Management system
</td>
<td>
RINA-C, UNEW,
UOB, MDM, TPS,
LIBAL
</td>
<td>
.pdf, .docx, .xls, .ppt, .dat, .mat Format of the source code for the R+G to
be still defined after the analysis of interfaces
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
</td> </tr>
<tr>
<td>
**WP5 - Demonstration and Validation of the E-LOBSTER**
</td> </tr>
<tr>
<td>
**Task**
</td>
<td>
**Description**
</td>
<td>
**Data**
</td>
<td>
**Owner(s)**
</td>
<td>
**Format**
</td>
<td>
**Confidential**
</td> </tr>
<tr>
<td>
T5.1
</td>
<td>
E-LOBSTER R+G
Management real-time simulation and validation in Hardware-in-the Loop
</td>
<td>
\- Dataset referred to the real-time testing in the in the Smart Grid UNEW
Laboratory with physical/emulation platform
</td>
<td>
UNEW, MDM,
TPS, UOB, FFE,
LIBAL
</td>
<td>
.xls, .dat, .mat, jpeg, .txt., .docx, .pdf
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
T5.2
</td>
<td>
E-LOBSTER R+G
Management and power electronics laboratory validation and testing at
the Smart Grid Laboratory
</td>
<td>
-Dataset referred to the laboratory Validation in the Smart Grid UNEW Laboratory of the R+G Management and power electronics devices
</td>
<td>
UNEW, MDM,
TPS, UOB, FFE,
LIBAL
</td>
<td>
.xls, .dat, .mat, jpeg, .txt., .docx, .pdf
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
T5.3
</td>
<td>
Demonstration and field testing of the R+G
Management
</td>
<td>
-Dataset referred to the Validation in the
field in the Metro of Madrid of the R+G Management System and power
</td>
<td>
MDM, RINA-C, FFE, TPS, LIBAL
</td>
<td>
.xls, .dat, .mat,
jpeg, .txt., .docx, .pdf, .ppt
</td>
<td>
Yes, with the except of Guidelines
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
electronics devices as well as the overall E-LOBSTER concept
-Guidelines and Best practices and measures for the installation of the selected technologies at pilot site
</td>
<td>
</td>
<td>
</td>
<td>
that are not confidential
</td> </tr>
<tr>
<td>
T5.4
</td>
<td>
Step by Step Monitoring
Process from TRL4 to TRL 6
</td>
<td>
\- Data related to the monitoring of KPI
</td>
<td>
UOB, UNEW, FFE, MDM, RINA-C
</td>
<td>
.xls, docx, .pdf
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**WP6 Dissemination and Route for Replication**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Task**
</td>
<td>
**Description**
</td>
<td>
**Data**
</td>
<td>
**Owner(s)**
</td>
<td>
**Format**
</td>
<td>
**Confidential**
</td> </tr>
<tr>
<td>
T6.1
</td>
<td>
Scale up of the E-LOBSTER concept and preliminary replication design
</td>
<td>
* Data for replicability studies
* Data related to cost benefit analysis for assessing replicability
</td>
<td>
RINA-C, FFE, TPS,
LIBAL, MDM
</td>
<td>
.xls, docx, .pdf
</td>
<td>
No
</td> </tr>
<tr>
<td>
T6.2
</td>
<td>
Business Model and
Roadmap Towards TRL 9
</td>
<td>
\- Data related to potential Business models and roadmaps
</td>
<td>
All partners
</td>
<td>
.xls, docx, .pdf, .ppt
</td>
<td>
Yes
</td> </tr>
<tr>
<td>
T6.3
</td>
<td>
Dissemination Activities
</td>
<td>
\- Dissemination materials in different forms
(video, leaflet, website etc.)
</td>
<td>
All
</td>
<td>
.mp4, .jpeg, .pdf, .docx
</td>
<td>
No
</td> </tr>
<tr>
<td>
T6.4
</td>
<td>
Stakeholders engagement for future marketability of
E-LOBSTER concept
</td>
<td>
\- Data related to the three stakeholder workshops with the 2 Stakeholders
Groups (DSOs and Transport manager in close collaboration with WP3 (results of
questionnaires, surveys, inputs etc).
</td>
<td>
RINA-C, RSSB, FFE, UITP
</td>
<td>
.pdf, .docx, .xls, .ppt, online survey
</td>
<td>
No
</td> </tr>
<tr>
<td>
T6.5
</td>
<td>
Exploitation and IPR management
</td>
<td>
* Guidelines for IPR strategy
* Market data with respect to the different exploitable results - Plans for exploitation
</td>
<td>
All partners
</td>
<td>
.docx, .pdf
</td>
<td>
Yes
</td> </tr> </table>
# Individual partner data management plan
In this chapter, the E-LOBSTER individual partner data management plan is
presented. In particular, each partner was requested to fill in the table
illustrated in Figure 1, according to the following instructions (a minimal
structured form of this template is sketched after Figure 1):
* **NAME:** name data/metadata/dataset;
* **DESCRIPTION:** brief description of data/metadata/dataset;
* **CREATED:** each partner should indicate if data/metadata/dataset was (or will be) created during the project (Yes/No);
* **GATHERED:** each partner should indicate if data/metadata/dataset was (or will be) collected from other sources (Yes/No);
* **TYPE:** each partner should indicate the type of data/metadata/dataset by selecting one or more of the following options: Document, Video, Images, Source code/Software, Algorithm, Raw Data, Dissemination material, etc.;
* **FORMAT:** each partner should indicate the file extension of data/metadata/dataset (.pdf, .xls, .mat, specific customised format, etc.) and whether a description of the data is available for its use;
* **SIZE:** each partner should indicate the file size of data/metadata/dataset (order of magnitude: KB, MB or GB);
* **OWNER** : the lead beneficiary of the specific data/metadata/dataset (or “external” if the owner is not part of E-LOBSTER consortium) has to be indicated;
* **DISSEMINATION LEVEL:** each partner should indicate the dissemination level of the specific data/metadata/dataset collected or created during the project, by selecting one of the followings: Confidential, Public, Consortium, etc;
* **REPOSITORY DURING THE PROJECT (FOR PRIVATE/PUBLIC ACCESS):** each partner should indicate the location of data/metadata/dataset collected or created during the project, by selecting among E-LOBSTER file repository (Nextcloud), open access repositories, partner repository (private cloud/ drop box/ internal area), etc;
* **BACK-UP FREQUENCY**: it refers to the frequency with which data/metadata/dataset collected or created during the project are backed up (daily, monthly, yearly, etc.);
* **REPOSITORY AFTER THE PROJECT:** the location of data/metadata/dataset collected or created during the project after its conclusion, by selecting among E-LOBSTER file repository (Nextcloud), open access repositories, partner repository (private cloud/ drop box/ internal area), etc;
* **PRESERVATION AFTER THE END OF THE PROJECT (IN YEARS** ): if data/metadata/dataset collected or created during the project will be maintained, each partner must define for how many years they will be available.
**Figure 1: Template for individual partner data management plan**
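Since each partner fills in the same fields, the template lends itself to a machine-readable representation. Below is a minimal sketch (in Python, purely illustrative; the class and field names are our own, not part of any E-LOBSTER tooling) of how one row of the Figure 1 template could be validated before being aggregated across partners.

```python
from dataclasses import dataclass

# Illustrative controlled vocabulary, taken from the template instructions above.
DISSEMINATION_LEVELS = {"Confidential", "Public", "Consortium"}

@dataclass
class DmpEntry:
    """One row of the individual partner DMP template (fields mirror Figure 1)."""
    name: str
    description: str
    created: bool                     # created during the project?
    gathered: bool                    # collected from other sources?
    type: str                         # e.g. "Raw data", "Source code/Software"
    format: str                       # e.g. ".xls, .pdf"
    size: str                         # order of magnitude: "KB", "MB" or "GB"
    owner: str                        # lead beneficiary, or "external"
    dissemination_level: str
    repository_during_project: str    # e.g. "E-LOBSTER repository (Nextcloud)"
    backup_frequency: str             # e.g. "daily", "weekly", "monthly"
    repository_after_project: str
    preservation_after_project: str   # e.g. "5 years", "Permanent"

    def __post_init__(self) -> None:
        # Reject rows whose dissemination level is outside the agreed vocabulary.
        if self.dissemination_level not in DISSEMINATION_LEVELS:
            raise ValueError(
                f"Unknown dissemination level: {self.dissemination_level!r}")
```

Aggregating such records would make consistency checks across the partner tables (for instance, that every confidential dataset names a repository and a preservation period) straightforward.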
## RINA-C DATA MANAGEMENT PLAN
<table>
<tr>
<th>
**NAME**
</th>
<th>
**DESCRIPTION**
</th>
<th>
**CREATED**
</th>
<th>
**GATHERED**
</th>
<th>
**OWNER**
</th>
<th>
**TYPE**
</th>
<th>
**FORMAT**
</th>
<th>
**SIZE**
</th>
<th>
**DISSEMINATION LEVEL**
</th>
<th>
**REPOSITORY DURING THE**
**PROJECT (FOR**
**PRIVATE/PUBLIC**
**ACCESS)**
</th>
<th>
**BACK-UP FREQUENCY**
</th>
<th>
**REPOSITORY**
**AFTER THE**
**PROJECT**
</th>
<th>
**PRESERVATION**
**AFTER THE**
**END OF THE**
**PROJECT (IN**
**YEARS)**
</th> </tr>
<tr>
<td>
Dataset - Monitoring data on
power
distribution network
</td>
<td>
This dataset includes monitoring data related to parameters of the power
distribution network received from the local DSO
</td>
<td>
No
</td>
<td>
YES
</td>
<td>
RINA-C/ local DSO
</td>
<td>
Raw data
</td>
<td>
.dat, .xls
</td>
<td>
MB
</td>
<td>
Confidential
</td>
<td>
E-LOBSTER repository (Nextcloud), Internal RINA-C repository
</td>
<td>
weekly
</td>
<td>
E-LOBSTER repository (Nextcloud), Internal RINA-C repository
</td>
<td>
5 years
</td> </tr>
<tr>
<td>
Dataset – Impact on consumers
</td>
<td>
Data including analysis of market, socioeconomic, legal and
institutional framework situations and trends
</td>
<td>
NO
</td>
<td>
YES
</td>
<td>
RINA-C /FFE
</td>
<td>
Data,
Documents
and figures
</td>
<td>
.xls
.pdf
.jpeg
</td>
<td>
MB
</td>
<td>
Public
</td>
<td>
E-LOBSTER repository (Nextcloud), Internal RINA-C repository
</td>
<td>
weekly
</td>
<td>
E-LOBSTER repository (Nextcloud), Internal RINA-C repository
</td>
<td>
5 years
</td> </tr>
<tr>
<td>
Data on energy losses in the railway and power
distribution networks
</td>
<td>
Data related to the analysis of the current situations in terms of energy
losses in the railway and power
distribution networks (state of the art)
</td>
<td>
NO
</td>
<td>
YES
</td>
<td>
RINA-C
</td>
<td>
Data,
Documents
and figures
</td>
<td>
.xls
.pdf
.jpeg
</td>
<td>
MB
</td>
<td>
Public
</td>
<td>
E-LOBSTER repository (Nextcloud), Internal RINA-C repository
</td>
<td>
weekly
</td>
<td>
E-LOBSTER repository (Nextcloud), Internal RINA-C repository
</td>
<td>
5 years
</td> </tr>
<tr>
<td>
R+G
Management
System
</td>
<td>
Dataset including all the simulations carried out for the validation of the
railway simulator tool (WP1)
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
RINA-C
</td>
<td>
Source code/ Software
</td>
<td>
to be defined after the definition of the different interfaces
</td>
<td>
MB
</td>
<td>
Confidential
</td>
<td>
RINA private internal repository
</td>
<td>
daily
</td>
<td>
RINA private internal repository
</td>
<td>
Permanent
</td> </tr>
<tr>
<td>
Dataset – Validation of
R+G Data
Management
System
</td>
<td>
Dataset related to testing of
the R+G Data
Management
System
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
RINA-C
</td>
<td>
Raw data and Documents
</td>
<td>
to be defined
</td>
<td>
MB
</td>
<td>
Confidential
</td>
<td>
E-LOBSTER repository (Nextcloud), Internal RINA-C repository
</td>
<td>
weekly
</td>
<td>
E-LOBSTER repository (Nextcloud), Internal RINA-C repository
</td>
<td>
5 years
</td> </tr>
<tr>
<td>
First
stakeholder workshop - Transport
Manager
Stakeholder
Group (SG)
</td>
<td>
Data related to the First stakeholder workshop:
questionnaires, surveys, video, audio, documentation, dissemination material,
minutes of workshop.
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
RINA- C,
UITP, RSSB
</td>
<td>
Documents, video, audio, data, pictures
</td>
<td>
.pdf
</td>
<td>
GB
</td>
<td>
Consortium
</td>
<td>
E-LOBSTER repository (Nextcloud), Internal partner repository
</td>
<td>
weekly
</td>
<td>
E-LOBSTER repository (Nextcloud), Internal partner repository
</td>
<td>
5 years
</td> </tr>
<tr>
<td>
Second stakeholder workshop - DSO
Stakeholder
Group (SG)
</td>
<td>
Data related to the second stakeholder workshop:
questionnaires, surveys, video, audio, documentation, dissemination material,
minutes of workshop.
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
RINA- C,
UITP, RSSB
</td>
<td>
Document
</td>
<td>
.pdf
</td>
<td>
GB
</td>
<td>
Consortium
</td>
<td>
E-LOBSTER file repository (Nextcloud), Internal partner repository
</td>
<td>
weekly
</td>
<td>
E-LOBSTER file repository (Nextcloud), Internal partner repository
</td>
<td>
5 years
</td> </tr>
<tr>
<td>
Joint
Stakeholder
Workshop – TM and DSO
Stakeholder
Group (SG)
</td>
<td>
Data related to the Final stakeholder workshop:
questionnaires, surveys, video, audio, documentation, dissemination material,
minutes of workshop.
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
RINA- C,
UITP, RSSB
</td>
<td>
Document
</td>
<td>
.pdf
</td>
<td>
GB
</td>
<td>
Consortium
</td>
<td>
E-LOBSTER file repository (Nextcloud), Internal partner repository
</td>
<td>
weekly
</td>
<td>
E-LOBSTER file repository (Nextcloud), Internal partner repository
</td>
<td>
5 years
</td> </tr>
<tr>
<td>
Dataset Standards
</td>
<td>
Data related to the analysis of current standards
</td>
<td>
No
</td>
<td>
Yes
</td>
<td>
Consortium
</td>
<td>
Document
</td>
<td>
.pdf
</td>
<td>
MB
</td>
<td>
Consortium
</td>
<td>
E-LOBSTER file repository (Nextcloud), Internal partner repository
</td>
<td>
weekly
</td>
<td>
E-LOBSTER file repository (Nextcloud), Internal partner repository
</td>
<td>
5 years
</td> </tr>
<tr>
<td>
Dataset – Cost Benefit
Analysis
</td>
<td>
Data related to the Cost
Benefit Analysis for the replication of the E-LOBSTER concept
</td>
<td>
Yes
</td>
<td>
YES
</td>
<td>
RINA-C
</td>
<td>
Document, data
</td>
<td>
.pdf, .xls
</td>
<td>
MB
</td>
<td>
Public
(document)
Confidential
(raw data)
</td>
<td>
E-LOBSTER file repository (Nextcloud), Internal partner repository
</td>
<td>
weekly
</td>
<td>
E-LOBSTER file repository (Nextcloud), Internal partner repository
</td>
<td>
5 years
</td> </tr> </table>
## TPS DATA MANAGEMENT PLAN
<table>
<tr>
<th>
**NAME**
</th>
<th>
**DESCRIPTION**
</th>
<th>
**CREATED**
</th>
<th>
**GATHERED**
</th>
<th>
**OWNER**
</th>
<th>
**TYPE**
</th>
<th>
**FORMAT**
</th>
<th>
**SIZE**
</th>
<th>
**DISSEMINATION LEVEL**
</th>
<th>
**REPOSITORY DURING THE**
**PROJECT (FOR**
**PRIVATE/PUBLIC**
**ACCESS)**
</th>
<th>
**BACK-UP FREQUENCY**
</th>
<th>
**REPOSITORY**
**AFTER THE**
**PROJECT**
</th>
<th>
**PRESERVATION**
**AFTER THE END**
**OF THE PROJECT**
**(IN YEARS)**
</th> </tr>
<tr>
<td>
Simulators
</td>
<td>
Simulator for testing sSOP
</td>
<td>
Yes
</td>
<td>
NO
</td>
<td>
TPS
</td>
<td>
Source code/Software
</td>
<td>
.PLECS
</td>
<td>
MB
</td>
<td>
Confidential
</td>
<td>
Partner repository
</td>
<td>
weekly
</td>
<td>
Partner repository
</td>
<td>
Permanent
</td> </tr>
<tr>
<td>
Dataset on testing
</td>
<td>
Outcomes of the testing activities through simulator
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
TPS
</td>
<td>
Raw data and documents
</td>
<td>
.dat .pdf
.xlsx
</td>
<td>
MB
</td>
<td>
Raw data confidential
</td>
<td>
Partner repository
(raw data),
Project Repository
Nextcloud
(documents)
</td>
<td>
weekly
</td>
<td>
Partner repository (raw data),
Project
Repository
Nextcloud
(documents)
</td>
<td>
At least 5 years
</td> </tr>
<tr>
<td>
sSOP Design
</td>
<td>
schema and design of sSOP
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
TPS
</td>
<td>
Documents
and figures
</td>
<td>
.jpg
.pdf
.vsd
</td>
<td>
MB
</td>
<td>
Confidential
</td>
<td>
Partner repository
</td>
<td>
weekly
</td>
<td>
Partner repository
</td>
<td>
Permanent
</td> </tr>
<tr>
<td>
Dataset validation in UNEW Laboratory
</td>
<td>
Outcomes of the validation activities during the test in the UNEW Smart Grid Laboratory
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
TPS
</td>
<td>
Raw data and documents
</td>
<td>
.dat .pdf
.doc
</td>
<td>
MB
</td>
<td>
Raw data confidential
</td>
<td>
Partner repository
(raw data),
Project Repository
Nextcloud
(documents)
</td>
<td>
weekly
</td>
<td>
Partner repository (raw data),
Project
Repository
Nextcloud
(documents)
</td>
<td>
At least 5 years
</td> </tr>
<tr>
<td>
Dataset validation in Metro of Madrid
</td>
<td>
Outcomes of the validation activities in the field in Metro of Madrid
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
TPS
</td>
<td>
Raw data and documents
</td>
<td>
.dat
.pdf
</td>
<td>
MB
</td>
<td>
Raw data confidential
</td>
<td>
Partner repository
(raw data),
Project Repository
Nextcloud
(documents)
</td>
<td>
weekly
</td>
<td>
Partner repository (raw data),
Project
Repository
Nextcloud
(documents)
</td>
<td>
At least 5 years
</td> </tr> </table>
## RSSB DATA MANAGEMENT PLAN
<table>
<tr>
<th>
**NAME**
</th>
<th>
**DESCRIPTION**
</th>
<th>
**CREATED**
</th>
<th>
**GATHERED**
</th>
<th>
**OWNER**
</th>
<th>
**TYPE**
</th>
<th>
**FORMAT**
</th>
<th>
**SIZE**
</th>
<th>
**DISSEMINATION LEVEL**
</th>
<th>
**REPOSITORY DURING THE**
**PROJECT (FOR**
**PRIVATE/**
**PUBLIC ACCESS)**
</th>
<th>
**BACK-UP FREQUENCY**
</th>
<th>
**REPOSITORY**
**AFTER THE**
**PROJECT**
</th>
<th>
**PRESERVATION**
**AFTER THE END**
**OF THE**
**PROJECT (IN**
**YEARS)**
</th> </tr>
<tr>
<td>
First
stakeholder workshop
(Transport
Manager SG)
</td>
<td>
Data related to the First stakeholder workshop: questionnaires, surveys,
documents
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
Consortium
</td>
<td>
Document
</td>
<td>
.pdf
</td>
<td>
MB
</td>
<td>
Consortium
</td>
<td>
E-LOBSTER file repository (Nextcloud), Internal partner repository
</td>
<td>
weekly
</td>
<td>
E-LOBSTER
file repository
(Nextcloud), Internal partner repository
</td>
<td>
5 years
</td> </tr>
<tr>
<td>
Second stakeholder workshop (DSO
SG)
</td>
<td>
Data related to the Second stakeholder workshop: questionnaires, surveys,
documents
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
Consortium
</td>
<td>
Document
</td>
<td>
.pdf
</td>
<td>
MB
</td>
<td>
Consortium
</td>
<td>
E-LOBSTER file repository (Nextcloud), Internal partner repository
</td>
<td>
weekly
</td>
<td>
E-LOBSTER
file repository
(Nextcloud), Internal partner repository
</td>
<td>
5 years
</td> </tr>
<tr>
<td>
Joint
Stakeholder Workshop with both SG
</td>
<td>
Data related to the Joint stakeholder workshop: questionnaires, surveys,
documents
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
Consortium
</td>
<td>
Document
</td>
<td>
.pdf
</td>
<td>
MB
</td>
<td>
Consortium
</td>
<td>
E-LOBSTER file repository (Nextcloud), Internal partner repository
</td>
<td>
weekly
</td>
<td>
E-LOBSTER
file repository
(Nextcloud), Internal partner repository
</td>
<td>
5 years
</td> </tr>
<tr>
<td>
Dataset Standards
</td>
<td>
Data related to the analysis of current standards
</td>
<td>
No
</td>
<td>
Yes
</td>
<td>
Consortium
</td>
<td>
Document
</td>
<td>
.pdf
</td>
<td>
MB
</td>
<td>
Consortium
</td>
<td>
E-LOBSTER file repository (Nextcloud), Internal partner repository
</td>
<td>
weekly
</td>
<td>
E-LOBSTER
file repository
(Nextcloud), Internal partner repository
</td>
<td>
5 years
</td> </tr> </table>
## UOB DATA MANAGEMENT PLAN
<table>
<tr>
<th>
NAME
</th>
<th>
DESCRIPTION
</th>
<th>
CREATED
</th>
<th>
GATHERED
</th>
<th>
OWNER
</th>
<th>
TYPE
</th>
<th>
FORMAT
</th>
<th>
SIZE
</th>
<th>
DISSEMINATION LEVEL
</th>
<th>
REPOSITORY
DURING THE
PROJECT (FOR
PRIVATE/PUBLIC
ACCESS)
</th>
<th>
BACK-UP
FREQUENCY
</th>
<th>
REPOSITORY AFTER THE PROJECT
</th>
<th>
PRESERVATION
AFTER THE END OF
THE PROJECT (IN
YEARS)
</th> </tr>
<tr>
<td>
Single and multi-train simulators
</td>
<td>
Bespoke simulator for Energy losses evaluation
(WP1)
</td>
<td>
Yes
</td>
<td>
NO
</td>
<td>
UOB
</td>
<td>
Source code/ Software
</td>
<td>
.mat
</td>
<td>
MB
</td>
<td>
Confidential
</td>
<td>
Partner repository
</td>
<td>
monthly
</td>
<td>
Partner repository
</td>
<td>
Permanent
</td> </tr>
<tr>
<td>
Madrid
Metro Line 2 – train, route and power network data
</td>
<td>
These data are used for the simulation to study the energy losses (WP1, WP5)
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
Madrid
Metro
Line 2
</td>
<td>
Raw data, Documents and figures
</td>
<td>
.xls .pdf
</td>
<td>
1160
KB
</td>
<td>
Raw data confidential
</td>
<td>
Partner repository Nextcloud project repository
</td>
<td>
monthly
</td>
<td>
Partner repository Nextcloud project repository
</td>
<td>
At least 5 years
</td> </tr>
<tr>
<td>
Energy losses evaluation data
</td>
<td>
These are the simulation data based on Madrid Metro Line 2. The data describe
the power losses with various feeding arrangement
(WP1)
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
UOB
</td>
<td>
Raw data, Documents and figures
</td>
<td>
.dat
.mat
.pdf
</td>
<td>
MB
</td>
<td>
Raw data
confidential,
Summing up of results public
</td>
<td>
Partner repository (raw data). Nextcloud project repository (document/presentation summing up the simulations)
</td>
<td>
monthly
</td>
<td>
Partner repository (raw data). Nextcloud project repository (document/presentation summing up the simulations)
</td>
<td>
At least 5 years
</td> </tr>
<tr>
<td>
Dataset for Single and multi-train simulator tool assessment
</td>
<td>
Dataset including all the simulations carried out for the validation of the
railway simulator tool
(WP1)
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
UOB
</td>
<td>
Raw data, Documents and figures
</td>
<td>
.dat
.mat
.pdf
</td>
<td>
MB
</td>
<td>
Raw data confidential,
Summing up of results public
</td>
<td>
Partner repository (raw data). Nextcloud project repository (document/presentation summing up the simulations)
</td>
<td>
monthly
</td>
<td>
Partner repository (raw data). Nextcloud project repository (document/presentation summing up the simulations)
</td>
<td>
At least 5 years
</td> </tr>
<tr>
<td>
Dataset for the integrated simulator tool assessment
</td>
<td>
Dataset including all the simulations carried out for the testing of the
integrated tool (WP1)
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
UOB, UNEW
</td>
<td>
Raw data, Documents and figures
</td>
<td>
.dat
.mat
.pdf
</td>
<td>
MB
</td>
<td>
Raw data confidential,
Summing up of results public
</td>
<td>
Partner repository (raw data). Nextcloud project repository (document/presentation summing up the simulations)
</td>
<td>
monthly
</td>
<td>
Partner repository (raw data). Nextcloud project repository (document/presentation summing up the simulations)
</td>
<td>
At least 5 years
</td> </tr>
<tr>
<td>
Dataset of laboratory validation
</td>
<td>
Data related to the validation of the simulator in the UNEW Smart Grid
Laboratory
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
UOB, UNEW
</td>
<td>
Raw data, Documents and figures
</td>
<td>
.dat
.mat
.pdf
</td>
<td>
MB
</td>
<td>
Raw data
confidential,
Summing up of results public
</td>
<td>
Partner repository (raw data). Nextcloud project repository (document/presentation summing up the simulations)
</td>
<td>
monthly
</td>
<td>
Partner repository (raw data). Nextcloud project repository (document/presentation summing up the simulations)
</td>
<td>
At least 5 years
</td> </tr> </table>
## LIBAL DATA MANAGEMENT PLAN
<table>
<tr>
<th>
**NAME**
</th>
<th>
**DESCRIPTION**
</th>
<th>
**CREATED**
</th>
<th>
**GATHERED**
</th>
<th>
**OWNER**
</th>
<th>
**TYPE**
</th>
<th>
**FORMAT**
</th>
<th>
**SIZE**
</th>
<th>
**DISSEMINATION LEVEL**
</th>
<th>
**REPOSITORY DURING THE**
**PROJECT (FOR**
**PRIVATE/PUBLIC**
**ACCESS)**
</th>
<th>
**BACK-UP FREQUENCY**
</th>
<th>
**REPOSITORY AFTER THE PROJECT**
</th>
<th>
**PRESERVATION**
**AFTER THE END**
**OF THE PROJECT**
**(IN YEARS)**
</th> </tr>
<tr>
<td>
Simulators CELATORS
</td>
<td>
Custom made simulator for testing the electric storage systems
</td>
<td>
Yes
</td>
<td>
NO
</td>
<td>
LIBAL
</td>
<td>
Source code/Software
</td>
<td>
C, C#, .txt
</td>
<td>
MB
</td>
<td>
Confidential
</td>
<td>
Partner repository
</td>
<td>
weekly
</td>
<td>
Partner repository
</td>
<td>
Permanent
</td> </tr>
<tr>
<td>
LIBAL Cloud
</td>
<td>
MS Azure Cloud data collection software
</td>
<td>
YES
</td>
<td>
NO
</td>
<td>
LIBAL
</td>
<td>
Source code/Software
</td>
<td>
N/A
</td>
<td>
MB
</td>
<td>
Confidential
</td>
<td>
Partner repository
</td>
<td>
Continuously
</td>
<td>
LIBAL
</td>
<td>
Permanent
</td> </tr>
<tr>
<td>
Dataset on testing
</td>
<td>
Outcomes of the testing
activities through simulator
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
LIBAL
</td>
<td>
Raw data and documents
</td>
<td>
.dat
.pdf
</td>
<td>
MB
</td>
<td>
confidential
</td>
<td>
Partner repository
(raw data), project Repository
Nextcloud
(documents)
</td>
<td>
weekly
</td>
<td>
Partner repository (raw data), project Repository
Nextcloud
(documents)
</td>
<td>
At least 5 years
</td> </tr>
<tr>
<td>
Electric
Storage
System
Design
</td>
<td>
Schema and design of Electric
Storage
Systems
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
LIBAL
</td>
<td>
Documents
and figures
</td>
<td>
.jpg
.pdf
</td>
<td>
MB
</td>
<td>
Confidential
</td>
<td>
Partner repository
</td>
<td>
weekly
</td>
<td>
Partner repository
</td>
<td>
Permanent
</td> </tr>
<tr>
<td>
Dataset validation in UNEW Laboratory
</td>
<td>
Outcomes of the validation activities during the test in UNEW Smart Grid
Laboratory
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
LIBAL
</td>
<td>
Raw data and documents
</td>
<td>
.dat
.pdf
</td>
<td>
MB
</td>
<td>
Confidential
</td>
<td>
Partner repository
(raw data), project Repository
Nextcloud
(documents)
</td>
<td>
weekly
</td>
<td>
Partner repository (raw data), project Repository
Nextcloud
(documents)
</td>
<td>
At least 5 years
</td> </tr>
<tr>
<td>
Dataset validation in Metro of Madrid
</td>
<td>
Outcomes of the validation activities in the field in Metro of Madrid
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
LIBAL
</td>
<td>
Raw data and documents
</td>
<td>
.dat
.pdf
</td>
<td>
MB
</td>
<td>
Raw data confidential
</td>
<td>
Partner repository
(raw data), project Repository
Nextcloud
(documents)
</td>
<td>
weekly
</td>
<td>
Partner repository (raw data), project Repository
Nextcloud
(documents)
</td>
<td>
At least 5 years
</td> </tr> </table>
## MDM DATA MANAGEMENT PLAN
<table>
<tr>
<th>
**NAME**
</th>
<th>
**DESCRIPTION**
</th>
<th>
**CREATED**
</th>
<th>
**GATHERED**
</th>
<th>
**OWNER**
</th>
<th>
**TYPE**
</th>
<th>
**FORMAT**
</th>
<th>
**SIZE**
</th>
<th>
**DISSEMINATION LEVEL**
</th>
<th>
**REPOSITORY DURING THE**
**PROJECT (FOR**
**PRIVATE/PUBLIC**
**ACCESS)**
</th>
<th>
**BACK-UP FREQUENCY**
</th>
<th>
**REPOSITORY**
**AFTER THE**
**PROJECT**
</th>
<th>
**PRESERVATION**
**AFTER THE END**
**OF THE PROJECT**
**(IN YEARS)**
</th> </tr>
<tr>
<td>
MDM Data
files from substation equipment
</td>
<td>
Recording data available from the different substation equipment to be used
for simulations
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
MDM
</td>
<td>
Document, raw data
</td>
<td>
.xls
</td>
<td>
100
MB
</td>
<td>
Consortium
</td>
<td>
Internal partner repository, Project repository
(Nextcloud)
</td>
<td>
weekly
</td>
<td>
Internal partner repository, Project repository (Nextcloud)
</td>
<td>
5
</td> </tr>
<tr>
<td>
MDM
Measurement data from the substation equipment
</td>
<td>
Real measurement from the substation equipment to be used for simulations
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
MDM
</td>
<td>
Document, raw data
</td>
<td>
.xls
</td>
<td>
10 MB
</td>
<td>
Consortium
</td>
<td>
Internal partner repository, Project repository
(Nextcloud)
</td>
<td>
weekly
</td>
<td>
Internal partner repository, Project repository (Nextcloud)
</td>
<td>
5
</td> </tr>
<tr>
<td>
MDM
Installations
</td>
<td>
Photographs of different installations in
MDM
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
MDM
</td>
<td>
Images
</td>
<td>
.jpg
</td>
<td>
100
MB
</td>
<td>
Consortium
</td>
<td>
Internal partner repository, Project repository
(Nextcloud)
</td>
<td>
weekly
</td>
<td>
Internal partner repository, Project repository (Nextcloud)
</td>
<td>
5
</td> </tr> </table>
## UNEW DATA MANAGEMENT PLAN
<table>
<tr>
<th>
**NAME**
</th>
<th>
**DESCRIPTION**
</th>
<th>
**CREATED**
</th>
<th>
**GATHERED**
</th>
<th>
**OWNER**
</th>
<th>
**TYPE**
</th>
<th>
**FORMAT**
</th>
<th>
**SIZE**
</th>
<th>
**DISSEMINATION LEVEL**
</th>
<th>
**REPOSITORY DURING THE**
**PROJECT (FOR**
**PRIVATE/**
**PUBLIC ACCESS)**
</th>
<th>
**BACK-UP FREQUENCY**
</th>
<th>
**REPOSITORY**
**AFTER THE**
**PROJECT**
</th>
<th>
**PRESERVATION**
**AFTER THE END**
**OF THE PROJECT**
**(IN YEARS)**
</th> </tr>
<tr>
<td>
Energy losses evaluation data on Madrid Metro Line 12
</td>
<td>
These are the simulation data based on Madrid Metro Line 12. The data describe the load flow study results as part of the simulation study in T1.2
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
UNEW
</td>
<td>
Raw data
Documents
and figures
</td>
<td>
.dat
.mat
.xls
.pdf
</td>
<td>
MB
</td>
<td>
Raw data confidential,
Summing up of results public
</td>
<td>
UNEW repository (raw data). Nextcloud project repository (document/presentation summing up the simulations)
</td>
<td>
daily
</td>
<td>
UNEW repository (raw data). Nextcloud project repository (document/presentation summing up the simulations)
</td>
<td>
At least 5 years
</td> </tr>
<tr>
<td>
Madrid
Metro
Line 12
</td>
<td>
This data is used for the simulation to study the energy losses (WP1, WP5)
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
MDM
</td>
<td>
Raw data
Documents
and figures
</td>
<td>
.xls .pdf
</td>
<td>
MB
</td>
<td>
Raw data confidential
</td>
<td>
Partner repository Nextcloud project repository
</td>
<td>
monthly
</td>
<td>
Partner repository Nextcloud project repository
</td>
<td>
At least 5 years
</td> </tr>
<tr>
<td>
Dataset
</td>
<td>
WP5 demonstration
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
UNEW
</td>
<td>
Experimental data
</td>
<td>
.xls
.pdf
.mat
</td>
<td>
kB
</td>
<td>
confidential
</td>
<td>
Private cloud
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Smart Grid Simulator
</td>
<td>
Simulator for Energy losses evaluation (WP1)
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
UNEW
</td>
<td>
Source code/Software
</td>
<td>
.m
.mat
</td>
<td>
MB
</td>
<td>
Confidential
</td>
<td>
UNEW internal repository
</td>
<td>
monthly
</td>
<td>
UNEW internal repository
</td>
<td>
Permanent
</td> </tr>
<tr>
<td>
Dataset for the integrated simulator tool assessment
</td>
<td>
Dataset including all the simulations carried out for the testing of the
integrated tool (WP1)
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
UOB, UNEW
</td>
<td>
Raw data
Documents
and figures
</td>
<td>
.dat
.mat
.xls
.pdf
</td>
<td>
MB
</td>
<td>
Raw data confidential,
Summing up of results public
</td>
<td>
UNEW repository (raw data). Nextcloud project repository
(document/presentation summing up the simulations)
</td>
<td>
daily
</td>
<td>
UNEW repository (raw data). Nextcloud project repository
(document/presentation summing up the simulations)
</td>
<td>
At least 5 years
</td> </tr>
<tr>
<td>
Dataset of laboratory validation
</td>
<td>
Data related to the validation of the simulator in the UNEW Smart Grid
Laboratory
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
UOB, UNEW
</td>
<td>
Raw data
Documents
and figures
</td>
<td>
.dat
.mat
.xls
.pdf
</td>
<td>
MB
</td>
<td>
Raw data confidential,
Summing up of results public
</td>
<td>
UNEW repository (raw data). Nextcloud project repository (document/presentation summing up the simulations)
</td>
<td>
daily
</td>
<td>
UNEW repository (raw data). Nextcloud project repository (document/presentation summing up the simulations)
</td>
<td>
At least 5 years
</td> </tr> </table>
## FFE DATA MANAGEMENT PLAN
<table>
<tr>
<th>
**NAME**
</th>
<th>
**DESCRIPTION**
</th>
<th>
**CREATED**
</th>
<th>
**GATHERED**
</th>
<th>
**OWNER**
</th>
<th>
**TYPE**
</th>
<th>
**FORMAT**
</th>
<th>
**SIZE**
</th>
<th>
**DISSEMINATION LEVEL**
</th>
<th>
**REPOSITORY DURING THE**
**PROJECT**
**(FOR**
**PRIVATE/**
**PUBLIC**
**ACCESS)**
</th>
<th>
**BACK-UP FREQUENCY**
</th>
<th>
**REPOSITORY**
**AFTER THE**
**PROJECT**
</th>
<th>
**PRESERVATION**
**AFTER THE END**
**OF THE PROJECT**
**(IN YEARS)**
</th> </tr>
<tr>
<td>
Data of consumer consumption.
</td>
<td>
Analysis of consumer behavior. Information on consumption and tools for the reduction of energy consumption.
</td>
<td>
No
</td>
<td>
Y (from previous projects)
</td>
<td>
FFE
</td>
<td>
report
</td>
<td>
.doc
</td>
<td>
MB
</td>
<td>
Public-use
</td>
<td>
Nextcloud and partner Private-cloud
</td>
<td>
daily
</td>
<td>
Partner repository / Private cloud
</td>
<td>
10 years
</td> </tr>
<tr>
<td>
Current operating standards
</td>
<td>
The document will describe the current standards and analyse the specific national regulations. A revision of the applicable standards will also be carried out.
</td>
<td>
No
</td>
<td>
Y (From state of art)
</td>
<td>
FFE
</td>
<td>
report
</td>
<td>
.doc (most probable) or .xlsx
</td>
<td>
MB
</td>
<td>
Public Use
</td>
<td>
Nextcloud and partner Private-cloud
</td>
<td>
daily
</td>
<td>
Partner repository / Private cloud
</td>
<td>
10 years
</td> </tr>
<tr>
<td>
Workshop feedbacks
</td>
<td>
The document will delineate the results of the Stakeholder Group workshops.
It will describe the most relevant factors and collect the evidence coming
from these workshops and the activities of these groups.
</td>
<td>
Yes
</td>
<td>
No
</td>
<td>
FFE
</td>
<td>
Docum
ent (report)
</td>
<td>
.doc
</td>
<td>
MB
</td>
<td>
Consortium
</td>
<td>
Nextcloud and partner Private-cloud
</td>
<td>
daily
</td>
<td>
Partner repository / Private cloud
</td>
<td>
10 years
</td> </tr>
</table>
## UITP DATA MANAGEMENT PLAN
<table>
<tr>
<th>
**NAME**
</th>
<th>
**DESCRIPTION**
</th>
<th>
**CREATED**
</th>
<th>
**GATHERED**
</th>
<th>
**OWNER**
</th>
<th>
**TYPE**
</th>
<th>
**FORMAT**
</th>
<th>
**SIZE**
</th>
<th>
**DISSEMINATION LEVEL**
</th>
<th>
**REPOSITORY DURING THE**
**PROJECT**
**(FOR**
**PRIVATE/**
**PUBLIC**
**ACCESS)**
</th>
<th>
**BACK-UP FREQUENCY**
</th>
<th>
**REPOSITORY**
**AFTER THE**
**PROJECT**
</th>
<th>
**PRESERVATION**
**AFTER THE END**
**OF THE**
**PROJECT (IN**
**YEARS)**
</th> </tr>
<tr>
<td>
Preliminary
Stakeholder
Workshop: E-
LOBSTER
Stakeholder
Workshops
No.1
Stakeholder
Workshop on
Electric PT – M13 – M36: Preliminary time Oct – Nov 2019
</td>
<td>
A preliminary workshop on the
Electric PT Stakeholder Group –
Feedback from the workshop
received in the form of
questionnaires, surveys, meeting
notes, documents
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
External
</td>
<td>
Text or word documents,
spreadsheets, questionnaires,
transcripts, meeting minutes;
audiotapes, videotapes; database
contents; publications, reports,
dissemination materials
</td>
<td>
PDF
</td>
<td>
MB
</td>
<td>
Consortium
</td>
<td>
E-LOBSTER
file repository (Nextcloud)
</td>
<td>
weekly
</td>
<td>
E-LOBSTER file repository (Nextcloud)
</td>
<td>
Maintained – 5 years
</td> </tr>
<tr>
<td>
Preliminary
Stakeholder
Workshop: E-
LOBSTER
Stakeholder
Workshops
No.2
Stakeholder
Workshop (Electric Distribution
Network Operators and Tech
Providers) – M13 – M36:
Preliminary time March – April 2020
</td>
<td>
A preliminary workshop on the
Electric Distribution Network
Operators and Tech Providers
Stakeholder Group – Feedback
from the workshop received in the
form of questionnaires, surveys,
meeting notes, documents
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
External
</td>
<td>
Text or word documents,
spreadsheets, questionnaires,
transcripts, meeting minutes;
audiotapes, videotapes; database
contents; publications, reports,
dissemination materials
</td>
<td>
PDF – description available
</td>
<td>
MB
</td>
<td>
Consortium
</td>
<td>
E-LOBSTER
file repository (Nextcloud)
</td>
<td>
weekly
</td>
<td>
E-LOBSTER file repository (Nextcloud)
</td>
<td>
Maintained – 5 years
</td> </tr>
<tr>
<td>
Joint
Stakeholder
Workshop: E-
LOBSTER
Stakeholder
Workshops
No.3
Stakeholder
Workshop (Electric PT and Electric
Distribution Network Operators and
Tech Providers) – (M13 – M36):
Preliminary time Feb – April 2021
</td>
<td>
A joint
workshop on
Electric PT and Electric Distribution
Network Operators and Tech
Providers
Stakeholder
Groups – Feedback from the
workshop
received in the forms of
questionnaires, surveys, meeting
notes, documents
</td>
<td>
Yes
</td>
<td>
Yes
</td>
<td>
External
</td>
<td>
Text or word documents,
spreadsheets, questionnaires,
transcripts, meeting minutes;
audiotapes, videotapes; database
contents; publications, reports,
dissemination materials
</td>
<td>
PDF – description available
</td>
<td>
MB
</td>
<td>
Consortium
</td>
<td>
E-LOBSTER
file repository (Nextcloud)
</td>
<td>
Yearly (After the workshop organised and conducted)
</td>
<td>
E-LOBSTER file repository (Nextcloud)
</td>
<td>
Maintained – 5 years
</td> </tr> </table>
# Research Data Repositories
Although E-LOBSTER decided not to participate in the Pilot Action on Open
Research Data, in order not to hamper the commercial interests of the
industrial partners, at this stage of the project the E-LOBSTER consortium
has investigated potential research data repositories to be used for sharing
non-confidential data (e.g. papers).
Among the different possibilities, ZENODO (_http://www.zenodo.org/_), the
free-of-charge open access repository of OpenAIRE (the Open Access
Infrastructure for Research in Europe, _https://www.openaire.eu/_), has been
considered. OpenAIRE is an EC-funded initiative which aims to support the Open
Access policy of the European Commission via a technical infrastructure.
OpenAIRE has grown through a series of project phases funded by the European
Commission 4 : from the DRIVER projects to link Europe’s repository
infrastructure, to the first OpenAIRE project aimed at assisting the EC in
implementing its initial pilot for Open Access (OA) to publications, and
through several further phases which have extended and consolidated the
OpenAIRE mission to implement Open Science policies. OpenAIRE currently
operates an interoperable and validated network of more than 520 repositories
and OA journals (integrating more than 9 million OA publications and 1,000
datasets). It has identified over 100,000 FP7 publications from about half of
the 26,000 FP7 projects, and offers literature-data integration services.
In particular, the OpenAIRE platform is the technical infrastructure that is
key for interconnecting the large-scale collections of research outputs across
Europe. It creates workflows and services on top of this valuable repository
content, which enable an interoperable network of repositories (via the
adoption of common guidelines) and easy upload into an all-purpose repository
(via ZENODO) 5 . On 29 October 2018, OpenAIRE became a fully fledged
organisation, with the formation of its legal entity, **OpenAIRE A.M.K.Ε.**,
a non-profit partnership, to ensure a permanent presence and structure for a
European-wide national policy and open scholarly communication infrastructure.
So far the objective of the OpenAIRE portal has been to make as much European-
funded research output as possible available to all. Institutional
repositories are typically linked to it. Furthermore, dedicated pages per
project are visible on the OpenAIRE portal, making research output
(publications, datasets or simple project information) accessible through the
portal thanks to the bibliographic metadata linked to each publication.
Concerning ZENODO: the OpenAIRE project, as explained above, is in the
vanguard of the open access and open data movements in Europe and was
commissioned by the EC to support its Open Data policy by providing a catch-
all repository for EC-funded research. CERN (an OpenAIRE partner and pioneer
in open source, open access and open data) provided this capability, and
ZENODO was launched in May 2013. In support of its research programme, CERN
has developed tools for Big Data management and extended Digital Library
capabilities for Open Data. Through ZENODO these Big Science tools can be
effectively shared with the research sector 6 .
**Figure 2: Home page of Zenodo**
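To make the mechanics concrete, the sketch below shows how a non-confidential output could be deposited on ZENODO through its public REST deposit API (in Python; the access token, file name and metadata values are placeholders, not actual project outputs).

```python
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
ACCESS_TOKEN = "YOUR-ZENODO-TOKEN"  # placeholder: personal token with deposit scope

# 1. Create an empty deposition (a draft record).
r = requests.post(ZENODO_API, params={"access_token": ACCESS_TOKEN}, json={})
r.raise_for_status()
deposition = r.json()

# 2. Upload a non-confidential file (e.g. an accepted paper) into the
#    deposition's file bucket.
bucket_url = deposition["links"]["bucket"]
with open("paper.pdf", "rb") as fp:
    r = requests.put(f"{bucket_url}/paper.pdf", data=fp,
                     params={"access_token": ACCESS_TOKEN})
    r.raise_for_status()

# 3. Attach minimal descriptive metadata (values are illustrative only).
metadata = {"metadata": {
    "title": "E-LOBSTER non-confidential result",
    "upload_type": "publication",
    "publication_type": "article",
    "description": "Open-access output of the E-LOBSTER project.",
    "creators": [{"name": "Surname, Name", "affiliation": "Partner"}],
}}
r = requests.put(f"{ZENODO_API}/{deposition['id']}",
                 params={"access_token": ACCESS_TOKEN}, json=metadata)
r.raise_for_status()
# Publishing is a further POST to the deposition's "publish" action link,
# normally done after the uploader has reviewed the draft.
```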
In addition to ZENODO, among the other repositories to be considered there is
also re3data.org.
re3data.org 7 is a global registry of research data repositories covering
repositories from different academic disciplines. It presents repositories
for the permanent storage of, and access to, data sets to researchers,
funding bodies, publishers and scholarly institutions. re3data.org promotes a
culture of sharing, increased access and better visibility of research data.
The registry went live in autumn 2012 and is funded by the German Research
Foundation (DFG). Project partners in re3data.org are the Berlin School of
Library and Information Science at the Humboldt-Universität zu Berlin, the
Library and Information Services department (LIS) of the GFZ German Research
Centre for Geosciences, the KIT Library at the Karlsruhe Institute of
Technology (KIT) and the Libraries of Purdue University. The German
partners are actively involved in the German Initiative for Network
Information (DINI) and in current research data management activities.
**Figure 3: Home page of re3data.org**
# Conclusions
This deliverable represents the E-LOBSTER Data Management Plan at month 6. The
scope of this Data Management Plan is to describe the data management life
cycle for the data to be collected, processed and/or created in the framework
of the E-LOBSTER project.
In particular, this document specifies how the E-LOBSTER research data will be
handled in the framework of the project as well as after its completion.
In more detail, the report indicates:
* what data will be collected, processed and/or created and from whom
* which data will be shared and which will be kept confidential
* how and where the data will be stored during the project
* which backup strategy will be applied for safely maintaining the data
* how the data will be preserved after the end of the project
In particular, the report has presented the data management plans at both
project and individual partner level.
The present Data Management Plan has to be considered as a living document,
and any future update or change in the E-LOBSTER data management policy will
be included in the periodic reports or will be specified in the deliverables
related to the specific tasks. In particular, the Data Management Plan will be
refined according to the IPR strategy that will be defined for the E-LOBSTER
exploitable results.
**Executive Summary**
In the current version (1.2), the technical definition of the datasets
stemming from the needs of the three PUCs, namely the _PUC requested datasets_
[AD1], has been addressed. Among other things, the technical definition aims
at univocally associating the requested datasets with products available
either from a Copernicus thematic service or from other sources, and at
specifying those that will be generated in the platform. Moreover, in some
cases datasets have turned out to correspond to more than one product, so they
have been split. Eventually, the resulting selection of derived EOPEN products
appears to be suitable for further usage, beyond the project PUCs.
Also, in the current version, the product FAIR characteristics are addressed,
including usage constraints as well as LTDP needs; this topic has been
included following some comments received at the first review, in view of
product usage beyond the project Consortium and its duration.
# 1 DATA SUMMARY
## 1.1 Introduction
EOPEN provides a framework in which user applications are managed and are
provided with the means to access datasets and perform processing. All of
which allow interoperability supported by the EOPEN core capabilities.
In addition, EOPEN provides Service Extensions. They are developed by the
EOPEN team and made available to all users of the EOPEN Framework typically
fulfilling data preparation tasks or scientific algorithms and processing
methodologies.
Generally speaking, the EOPEN Framework does not retain data. EOPEN is
“application agnostic”, implying that it is transparent with regard to data
characteristics, including access restrictions and data storage.
However, Service Extensions can take data and apply enhancements (processing
and data fusion) to create data used by the user applications.
The following picture depicts the high level EOPEN operational concept and
dataflow.
## 1.2 User Applications
All data collected and made available in the platform relates to user
applications. In the EOPEN project the user applications are represented by
three use cases.
In the case of PUC1-Flood Risk Assessment and Prevention, the partner involved
is a public body (AAWA), currently not using satellite data systematically,
which is interested in receiving mapped information derived from EO satellite
data that can be used to improve its models and data, as well as additional
meteorological forecast data; normally such datasets are open and can be made
available to identified stakeholders [AD1] or to others not yet reached.
In the case of PUC2-Food Security, several institutes, research or public
agencies are involved - this use case, located outside Europe, is connected
to some national institutions of Korea; some datasets, provided by the
stakeholders, appear to be of restricted usage, nevertheless some datasets
generated in the platform (e.g. rice mapping) will be accessible. In addition
to the datasets identified during the requirement analysis, PUC2 can also
benefit from some meteorological data and data analyses available through the
Korean Meteorological Service.
Finally, in the case of PUC3-Monitoring Climate Change, the partner
responsible for its implementation is a national service - the identified
datasets can serve a community of users which can be extended; a usage
transversal to various user communities can also be envisaged. The identified
datasets are mainly of non-EO type; in addition to them, some "EO-based"
datasets are being considered, following the 1st project review meeting. PUC3
datasets are normally open.
The datasets for the use cases handled in the platform belong to the following
typologies:
1a. EO data products available from Copernicus Data Ware House (DWH),
including proprietary datasets; (ref. 6.3)
1b. EO data from the EOPEN Umbrella Hub (former Collaborative Ground Segment,
CollGS) [RD1];
2. Datasets stemming from Copernicus Thematic Services; (ref. 6.2.3)
3. Datasets stemming from Copernicus EMS services; (ref. D1.1)
4. Datasets stemming from other EC Project (Open Data); (ref. 6.2)
5. Non-EO datasets, including meteorological data; (ref. 6.2)
6. Datasets generated in the EOPEN platform; (ref. 5)
7. Datasets available from stakeholders, including proprietary datasets, or generated by them using resources available from the EOPEN platform. (ref. 6.2)
The needed datasets are mainly EO satellite data/products from the Copernicus
DWH; some environmental variables derived from EO data - Essential Climate
Variables (ECV) (e.g. LAI) - available from the Copernicus Land Services; and
some non-EO data, such as climatic data and social media data, in addition to
weather forecasts (derived from models including satellite data).
EO data products from the Copernicus Warehouse are mainly open data; some EO
data from the Copernicus Contributing Missions are available in limited
amounts (Quotas) during the project lifetime (ref. 6.3).
The types of Requested Datasets, typical of the application area at the basis
of the Use Cases, depend also on the stakeholders involved in the project.
[AD3]
The FAIR paradigm is addressed in section 4, focused on the EOPEN product
description.
Whereas the current deliverable deals with a general product description and
management, other deliverables describe in detail how input data are accessed
and handled - in particular, [RD1] is focused on a unique Sentinel Data Hub
access point, which is called the Umbrella Hub; in addition, access to DWH
data through DIAS is addressed in [RD3] (M26); [RD4] (M26) is focused on
meteorological and climatological data acquisition; and [RD5] (M32) is focused
on social media crawlers to handle social media data.
Standardization, interoperability and long-term data preservation aspects are
the focus of other WPs (WP6, WP8), so they will be treated in detail in other
deliverables (D6.1, D8.6).
Some information about allocated resources and data security is provided in
sections 2 and 3.
# 2 ALLOCATION OF RESOURCES
Data in the EOPEN project are acquired for, used by and produced by the user
applications of the Use Cases. In many cases the data come from open and
freely available sources such as Copernicus. Data produced by the use case
applications in the EOPEN project are mainly in the public domain; however,
some data, for example social media tweets, are constrained by specific
ownership and privacy regulations.
How to make FAIR compliant those EOPEN datasets that are not already FAIR by
nature of their source is currently under evaluation.
As to costs, these will depend on the adopted solution; this topic will be
addressed in the coming months. An aspect still to be clarified is which data
can be classified as research data and thus be chargeable to the project.
Which data are subject to FAIR is described in section 4.
Serco, as the partner responsible for the DMS, will oversee this activity with
the support of SPACEAPPS.
Some LTDP resources have been planned. Firstly, “EOPEN has the responsibility
for providing the means to produce data that will survive (i.e. standards and
formats applied)”. The concepts, design and implementation of the framework
core ensure that users create well defined processes and data exchanges.
Secondly, the EOPEN LTDP framework, deliverable of WP8.5, will ensure the
accessibility and ability to re-use EOPEN datasets, in line with the EOPEN
data management plan. “The LTDP framework will be based on the preservation by
design approach researched in the PERICLES”. (GA, Annex 1, p. 124)
LTDP is also a prerequisite for long term sustainability of EOPEN.
# 3 DATA SECURITY
Overseeing security is part of SERCO’s leading tasks.
User management, privacy and security requirements are defined to establish
procedures, software and infrastructure to ensure that the platform and its
users is protected from unauthorised access, accidental revealing of
privileged information and breaches by those with ill-intent are defined in
D6.1.
EOPEN has user authentication in place and applies protocols for communicating
with other platforms. PA/QA procedures to audit the platforms used for
processing (e.g. DIAS/TEPs) are in place; moreover, security is also covered
in the tables used to compare DIAS platforms (tbc).
The EOPEN project uses only European cloud providers that have clearly defined
security policies and implement precautions such as anti-DDoS services.
DIAS-ONDA is such a provider, offering an anti-DDoS service with operations
designed to be security compliant with ISO/IEC 27000 and Data Protection
Directive 95/46/EC.
[Anti-DDoS (Distributed Denial of Service) protection guards against certain
denial-of-service attacks
(https://en.wikipedia.org/wiki/Denial-of-service_attack#Distributed_DoS_attack)]
Hereafter, some information on data security is reported, referring to the
related deliverables.
**Umbrella Hub**
[RD1] describes such Umbrella, acting as a broker of existing hubs - it does
not add any extra information value, but merely distributes existing data of
the Sentinel Access points; below some features related to security are
described.
_Database Access_
Establishing a database connection requires the following credentials:
database name, username, password, host IP and port. These credentials are
used by the application to interact with the database, and securing them is
quite important. Therefore, the stored database credentials are retrieved in
one of the following ways (a minimal sketch follows the list):
* A JSON file stored in the working directory.
* Environmental variables.
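As a minimal sketch of this resolution order (in Python; the key names and file name are illustrative, not the actual EOPEN configuration), environment variables could be read first, with the JSON file in the working directory as a fallback:

```python
import json
import os

def load_db_credentials(json_path: str = "db_credentials.json") -> dict:
    """Resolve database credentials, preferring environment variables and
    falling back to a JSON file in the working directory.
    Key names are hypothetical, chosen only for this sketch."""
    keys = ("DB_NAME", "DB_USER", "DB_PASSWORD", "DB_HOST", "DB_PORT")
    creds = {k: os.environ.get(k) for k in keys}
    if not all(creds.values()):            # fall back to the JSON file
        with open(json_path) as fp:
            file_creds = json.load(fp)
        for k in keys:
            creds[k] = creds[k] or file_creds.get(k)
    missing = [k for k, v in creds.items() if not v]
    if missing:
        raise RuntimeError(f"Missing database credentials: {missing}")
    return creds
```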
_REST API Access_
The application allows users only to retrieve resource
representations/information, while modification is restricted in every way. As
GET requests do not modify the state of the resource, they are considered safe
methods.
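A minimal sketch of such a read-only endpoint is given below (using Flask purely for illustration; the route and payload are hypothetical, not the actual Umbrella Hub API). Because only the GET method is registered, any attempt to modify the resource (POST, PUT, DELETE) is rejected by the framework:

```python
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Hypothetical in-memory catalogue standing in for the real database.
PRODUCTS = {"S2A_example": {"mission": "Sentinel-2", "status": "available"}}

@app.route("/products/<product_id>", methods=["GET"])  # only GET is allowed
def get_product(product_id):
    record = PRODUCTS.get(product_id)
    if record is None:
        abort(404)          # unknown resource
    return jsonify(record)  # representation only; no state is modified

if __name__ == "__main__":
    app.run()
```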
_Metadata harvesting and sharing_
Metadata are harvested from the hubs by sending requests to the respective
APIs. Requests are based on the hubs’ individual security policy. This policy
includes authorization via credentials for accessing the API of each hub. The
hubs’ generated credentials are stored in the database and retrieved every
time the request is constructed. In this way, Umbrella hub collects and saves
the metadata from different endpoints. As a result, the obtained metadata are
made available for downloading by any user. The download process requires
Umbrella hub users to be registered, in turn, to the different hubs connected,
as this process requires also authorization. Finally, download integrity can
be verified using the MD5 checksum provided by the Umbrella hub.
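Verifying that MD5 checksum on the client side is a small routine around Python's `hashlib`; a sketch follows (the file path and expected digest are supplied by the caller, e.g. taken from the hub's metadata):

```python
import hashlib

def md5_matches(path: str, expected_md5: str,
                chunk_size: int = 8 * 1024 * 1024) -> bool:
    """Return True if the file's MD5 digest matches the checksum advertised
    by the hub, reading the file in chunks to bound memory usage."""
    digest = hashlib.md5()
    with open(path, "rb") as fp:
        for chunk in iter(lambda: fp.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest().lower() == expected_md5.lower()
```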
[RD2] is about setting up and providing access through the EOPEN platform to
the Big Data and HPC infrastructure located at HLRS. Data locality is of
utmost importance in order to achieve maximum performance, and thus both the
submission of new processing jobs and input data are required to be
transferred in a secure and fast manner to these systems before processing can
start. [RD2] detailed the setup of Cloudify, an orchestration tool to submit
workflows via a RESTful service, between the EOPEN platform and HLRS. Allowing
workflow submissions to the HPC system with Cloudify is achieved with
milestone MS1, where security is guaranteed by using secure communication via
SSL, meaning that all communication between client and server is encrypted.
Further, confidential information is hidden in data that is sent through
Cloudify by using Cloudify’s own secret store. Next actions include enabling a
secure and fast data transfer from and to HLRS infrastructure via Rucio. Rucio
is a scientific data management framework that is developed by CERN with the
intention to transfer very large data. EOPEN is following here best practices
for authentication and authorization, and the actual transfer will benefit
from parallel data streams such as provided by the GridFTP protocol to ensure
a fast and secure data transfer between sites.
# 4 FROM PUC DATASETS TO EOPEN PRODUCTS
The PUC requested datasets [AD1] have been further considered with a view to
associating them with products available from some Copernicus service or from
other sources, or to identifying suitable products to be generated in the
platform by applying EO data analyses. In some cases, a dataset has been split
in two as it turned out to include different data typologies (ref. section 1).
A selection of EOPEN products has been derived, generalized for further usage
beyond the project PUCs.
PUC Requested Datasets and resulting products are summarized in the tables
presented in the sections 4.1.3, 4.2.3 and 4.3.3 - a colour code has been
associated to each data type as shown in the Legend below. Products are
described in detail in sections 5 and 6.
Initially, the PUC requested datasets were analysed in terms of dataset scope,
EO and non-EO data needs, data sources, archiving location, time scope, and
volumes. A summary of the EO data-products needed from the Copernicus DWH is
reported in section 6.3.
Normally such information is aligned with the most recent available
information, chapters 5-6 being the “living part” of the document.
Below is a list of the information collected by means of a Google spreadsheet
- the second and third rows refer to the EO data-products made available
through the Copernicus DWH, including data-products available under Quota
restrictions (ref. section 6.3) during the project lifetime.
<table>
<tr>
<th>
1\.
</th>
<th>
EO Missions
</th> </tr>
<tr>
<td>
2\.
</td>
<td>
DAP Datasets
</td> </tr>
<tr>
<td>
3\.
</td>
<td>
Product
</td> </tr>
<tr>
<td>
4\.
</td>
<td>
Format
</td> </tr>
<tr>
<td>
5\.
</td>
<td>
Source
</td> </tr>
<tr>
<td>
6\.
</td>
<td>
Access Restrictions
</td> </tr>
<tr>
<td>
7\.
</td>
<td>
Storage
</td> </tr>
<tr>
<td>
8\.
</td>
<td>
Area
</td> </tr>
<tr>
<td>
9\.
</td>
<td>
Time Interval
</td> </tr>
<tr>
<td>
10\.
</td>
<td>
Data Volume
</td> </tr>
<tr>
<td>
11\.
</td>
<td>
Scope
</td> </tr>
<tr>
<td>
12\.
</td>
<td>
Notes
</td> </tr> </table>
Table 1: Requested Datasets information
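For illustration, a CSV export of such a spreadsheet could be loaded into per-dataset records as sketched below (in Python; the column names follow Table 1, while the file name is hypothetical):

```python
import csv

# Columns as listed in Table 1.
FIELDS = ["EO Missions", "DAP Datasets", "Product", "Format", "Source",
          "Access Restrictions", "Storage", "Area", "Time Interval",
          "Data Volume", "Scope", "Notes"]

def load_requested_datasets(csv_path: str) -> list[dict]:
    """Read a CSV export of the dataset-collection spreadsheet,
    keeping only the Table 1 columns and trimming whitespace."""
    with open(csv_path, newline="", encoding="utf-8") as fp:
        reader = csv.DictReader(fp)
        return [{f: (row.get(f) or "").strip() for f in FIELDS}
                for row in reader]

# Example (file name is a placeholder):
# records = load_requested_datasets("puc_requested_datasets.csv")
```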
PUC requested datasets and the derived products are reported in the sections
4.1-3, FAIR characteristics and LTDP needs are summarized in section 4.4.
## 4.1 PUC1 - Flood risk assessment and prevention datasets
### 4.1.1 PUC1 Requested Datasets
<table>
<tr>
<th>
**Use case D-id**
</th>
<th>
**Dataset Description**
</th> </tr>
<tr>
<td>
PUC1_DA1
</td>
<td>
DEM/DSM 1m (e.g. Airbus Pleiades) from DWH
</td> </tr>
<tr>
<td>
PUC1_DA2
</td>
<td>
Snow maps with a resolution < 20m
</td> </tr>
<tr>
<td>
PUC1_DA3
</td>
<td>
Soil moisture maps with resolution < 10m
</td> </tr>
<tr>
<td>
PUC1_DA4
</td>
<td>
Flood maps (*)
</td> </tr>
<tr>
<td>
PUC1_DA5
</td>
<td>
Damage maps (**)
</td> </tr>
<tr>
<td>
PUC1_DA6
</td>
<td>
Water presence maps
</td> </tr>
<tr>
<td>
PUC1_DA7
</td>
<td>
Bathymetry of coast, lakes, rivers
</td> </tr>
<tr>
<td>
PUC1_DA8
</td>
<td>
Ortho-photo with resolution of 50 cm (e.g. WorldView4)
</td> </tr>
<tr>
<td>
PUC1_DA9
</td>
<td>
Vegetation presence
</td> </tr>
<tr>
<td>
PUC1_DA10
</td>
<td>
Land cover
</td> </tr>
<tr>
<td>
PUC1_DA11
</td>
<td>
LAI and other vegetation indexes
</td> </tr>
<tr>
<td>
PUC1_DA12
</td>
<td>
Other maps (thermal or multispectral data ready to be processed) with high
resolution
</td> </tr>
<tr>
<td>
PUC1_DA13
</td>
<td>
Weather forecast
</td> </tr>
<tr>
<td>
PUC1_DA14
</td>
<td>
Social media
</td> </tr> </table>
Table 2: PUC1 Requested Datasets [AD1]
(*) Generated by the stakeholder at its premises or exploiting EOPEN
resources.
(**) From Copernicus EMS or generated by the stakeholder at its premises.
### 4.1.2 PUC1 Data Summary Information
Table 3: PUC1 Data Summary
### 4.1.3 PUC1 Products (*)
<table>
<tr>
<th>
**Dataset ID.**
</th>
<th>
**Product Name/acronym**
</th>
<th>
**Web Location (**)**
</th> </tr>
<tr>
<td>
PUC1_DA2.1
</td>
<td>
Snow Water Equivalent/ SWE_CGLS
(**)
</td>
<td>
https://land.copernicus.vgt.vito.be/PDF/portal/Application.html#Browse;Root=512260;Collection=1000061;Time=NORMAL,NORMAL,-1,,,-1
</td> </tr>
<tr>
<td>
PUC1_DA2.2
</td>
<td>
Snow Cover Extent/SCE_CGLS (**)
</td>
<td>
https://land.copernicus.vgt.vito.be/PDF/portal/Application.html#Browse;Root=1000101;Collection=29870071;Time=NORMAL,NORMAL,-1,,,-1
</td> </tr>
<tr>
<td>
PUC1_DA3
</td>
<td>
Surface Soil Moisture/SME_CGLS
(**)
</td>
<td>
https://land.copernicus.vgt.vito.be/PDF/portal/Application.html#Browse;Root=71027541;Collection=1000282;Time=NORMAL,NORMAL,-1,,,-1
https://land.copernicus.eu/global/sites/cgls.vito.be/files/products/CGLOPS1_PUM_SSM1km-V1_I1.30.pdf
</td> </tr>
<tr>
<td>
PUC1_DA4
</td>
<td>
EOPEN AAWA AMICO Early
Warning System Flood Forecast
/AA_EWS_FF
</td>
<td>
</td> </tr>
<tr>
<td>
PUC1_DA5
</td>
<td>
EC EMS Damage Map (or others)/
EC_EMS_DM (tbc)
</td>
<td>
</td> </tr>
<tr>
<td>
PUC1_DA6_a
</td>
<td>
Water Presence Map/WPM
</td>
<td>
</td> </tr>
<tr>
<td>
PUC1_DA6_b
</td>
<td>
Water Presence Change
Monitoring/ WPCM
</td>
<td>
</td> </tr>
<tr>
<td>
PUC1_DA11.1
</td>
<td>
Leaf Area Index/LAI_CGLS
(**)
</td>
<td>
https://land.copernicus.vgt.vito.be/PDF/datapool/Vegetation/Properties/LAI_300m_V1/2019/05/20/
</td> </tr>
<tr>
<td>
PUC1_DA11.2
</td>
<td>
Fraction of vegetation Cover
/FCOVER_CGLS (**)
</td>
<td>
https://land.copernicus.vgt.vito.be/PDF/portal/Application.html#Browse;Root=512260;Collection=1000061;Time=NORMAL,NORMAL,-1,,,-1
</td> </tr>
<tr>
<td>
PUC1_DA11.3
</td>
<td>
Normalized Difference Vegetation
Index /NDVI_CGLS (**)
</td>
<td>
https://land.copernicus.vgt.vito.be/PDF/portal/Application.html#Browse;Root=513186;Collection=1000063;Time=NORMAL,NORMAL,-1,,,-1
</td> </tr>
<tr>
<td>
PUC1_DA11.4
</td>
<td>
Vegetation Condition Index
VCI_CGLS (**)
</td>
<td>
https://land.copernicus.vgt.vito.be/PDF/portal/Application.html#Browse;Root=513186;Collection=728779;Time=NORMAL,NORMAL,-1,,,-1
</td> </tr>
<tr>
<td>
PUC1_DA13.1
</td>
<td>
FMI Weather Forecast
(HIRLAM_NWP)
</td>
<td>
</td> </tr>
<tr>
<td>
PUC1_DA14
</td>
<td>
Tweet datasets/TD (tbc)
</td>
<td>
</td> </tr> </table>
Table 4: PUC1 Products
(*) Copernicus DWH data-products are not included in this table
(**) CGLS => Copernicus Land Service
## 4.2 PUC2: Food Security
### 4.2.1 PUC2 Requested Datasets
<table>
<tr>
<th>
**Use case D-**
**id**
</th>
<th>
**Dataset Description**
</th> </tr>
<tr>
<td>
PUC2_DB1
</td>
<td>
High resolution remote sensing imagery
</td> </tr>
<tr>
<td>
PUC2_DB2
</td>
<td>
Meteorological observation (a) and forecasting data (b)(*)
</td> </tr>
<tr>
<td>
PUC2_DB3
</td>
<td>
In field inspection data (**)
</td> </tr>
<tr>
<td>
PUC2_DB4
</td>
<td>
Farmers’ claims data (**)
</td> </tr>
<tr>
<td>
PUC2_DB5
</td>
<td>
Accurate yield statistics (***)
</td> </tr>
<tr>
<td>
PUC2_DB6
</td>
<td>
EO based production status (****)
</td> </tr>
<tr>
<td>
PUC2_DB7
</td>
<td>
Statistical data on national fertilizer usage (**)
</td> </tr>
<tr>
<td>
PUC2_DB8
</td>
<td>
Social media
</td> </tr> </table>
Table 5: PUC2 Requested Datasets [AD1]
(*) It has turned out to be of no use in PUC2.
(**) After a crosscheck with the stakeholders, it has turned out that such
data will not be available.
(***) See note Table 7.
(****) This dataset has been removed following the redefinition of PUC2_DB1
(see section 4.2.3)
**D1.4 V1.2**
<table>
<tr>
<th>
**Dataset ID.**
</th>
<th>
**Description**
</th>
<th>
**Product Short Name/Acronym**
</th> </tr>
<tr>
<td>
PUC2_DB1_a
</td>
<td>
Timeseries of vegetation indices and crop growth indicators
</td>
<td>
Rice Status Indicator (RSI)
</td> </tr>
<tr>
<td>
PUC2_DB1_b
</td>
<td>
Timeseries of rice maps at 10 m spatial resolution
</td>
<td>
Paddy Rice Mapping (PRM)
</td> </tr>
<tr>
<td>
PUC2_DB2_a
</td>
<td>
Meteorological observation
</td>
<td>
KMA Open API
</td> </tr>
<tr>
<td>
PUC2_DB2_b
</td>
<td>
Weather forecast
</td>
<td>
KMA Open API (No longer considered)
</td> </tr> </table>
### 4.2.2 PUC2 Data Summary Information
Table 6: PUC2 Data Summary
### 4.2.3 PUC2 Products
Some of the PUC2 datasets originally requested have been modified in
accordance with [AD3]; the resulting product list is given below.
<table>
<tr>
<th>
**Dataset ID.**
</th>
<th>
**Description**
</th>
<th>
**Product Short Name/Acronym**
</th> </tr>
<tr>
<td>
PUC2_DB3
</td>
<td>
In field inspection data
</td>
<td>
NA
</td> </tr>
<tr>
<td>
PUC2_DB4
</td>
<td>
Farmers’ claims data
</td>
<td>
NA
</td> </tr>
<tr>
<td>
PUC2_DB5
</td>
<td>
Accurate yield statistics (*)
</td>
<td>
Rice Yield Estimation (RYE)
</td> </tr>
<tr>
<td>
PUC2_DB6
</td>
<td>
removed
</td>
<td>
</td> </tr>
<tr>
<td>
PUC2_DB7
</td>
<td>
Statistical data on national
fertilizer usage
</td>
<td>
</td> </tr>
<tr>
<td>
PUC2_DB8
</td>
<td>
Social media
</td>
<td>
Tweet datasets/TD (tbc)
</td> </tr> </table>
Table 7: PUC2 Products
(*) The ultimate product is considered “high risk, high gain”. The product
refers to national-scale rice yield estimation for South Korea. Since ground
truth data, such as fertilizer usage, cultivation practices, and
high-resolution meteorological or soil data, are not freely available, we will
only make use of Sentinel data. We will attempt to use advanced machine
learning and regression techniques to estimate yield from multi-year time
series of Sentinel data, correlated with the freely available district-level
yield statistics (an illustrative sketch of such a regression setup is given
below). Nonetheless, the accuracy and overall usefulness of the ultimate
product is not guaranteed.
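As a minimal, purely illustrative sketch of such a regression setup (random
data and a hypothetical feature layout, not the project's actual pipeline),
yield could be regressed on per-district time series of Sentinel-derived
vegetation indices:

```python
# Minimal sketch: regressing district-level rice yield on multi-year
# Sentinel-derived vegetation-index features. Data, shapes and feature
# layout are hypothetical; the actual EOPEN pipeline may differ.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per (district, year), columns are
# e.g. dekadal NDVI samples over the growing season.
X = rng.random((200, 18))
# Hypothetical target: district-level yield statistics (t/ha).
y = 4.0 + 3.0 * X.mean(axis=1) + rng.normal(0.0, 0.2, size=200)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```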
## 4.3 PUC3: Monitoring Climate Change
### 4.3.1 PUC3 Requested Datasets
<table>
<tr>
<th>
**Use case D-**
**id**
</th>
<th>
**Dataset Description**
</th> </tr>
<tr>
<td>
PUC3_DC1
</td>
<td>
Snow cover observations
</td> </tr>
<tr>
<td>
PUC3_DC2
</td>
<td>
Ground (soil) temperature data
</td> </tr>
<tr>
<td>
PUC3_DC3
</td>
<td>
Air temperature at 2 m height
</td> </tr>
<tr>
<td>
PUC3_DC4
</td>
<td>
Snow accumulation maps
</td> </tr>
<tr>
<td>
PUC3_DC5
</td>
<td>
Climatological data for meteorological observations
</td> </tr>
<tr>
<td>
PUC3_DC6
</td>
<td>
Social media
</td> </tr>
<tr>
<td>
PUC3_DC7
</td>
<td>
Climate change scenario projections
</td> </tr>
<tr>
<td>
PUC3_DC8
</td>
<td>
Weather observation time-series
</td> </tr>
<tr>
<td>
PUC3_DC9
</td>
<td>
Numerical weather prediction model forecasts
</td> </tr>
<tr>
<td>
PUC3_DC10
</td>
<td>
Region and municipality borders
</td> </tr>
<tr>
<td>
PUC3_DC11
</td>
<td>
Herding area borders
</td> </tr>
<tr>
<td>
PUC3_DC12
</td>
<td>
Road maintenance classification
</td> </tr> </table>
Table 8: PUC3 Requested Datasets
### 4.3.2 PUC3 Data Summary Information
Table 9: PUC3 Data Summary
### 4.3.3 PUC3 Products
<table>
<tr>
<th>
Dataset ID.
</th>
<th>
Product Name / acronym
</th>
<th>
Web Location
</th> </tr>
<tr>
<td>
PUC3_DC1
</td>
<td>
GlobSnow Snow Water Equivalent (GlobSnow_SWE)
</td>
<td>
http://nsdc.fmi.fi/data/data_globsnow_swe
</td> </tr>
<tr>
<td>
PUC3_DC2.1
</td>
<td>
SMOS Level 3 Soil Freeze and Thaw Service (SMOS_L3FT)
</td>
<td>
http://nsdc.fmi.fi/data/data_smos
</td> </tr>
<tr>
<td>
PUC3_DC2.2
</td>
<td>
Sentinel-3 SLSTR Level-2 LST (Sentinel3_LST)
</td>
<td>
https://sentinels.copernicus.eu/web/sentinel/user-guides/sentinel-3-slstr/product-types/level-2-lst
</td> </tr>
<tr>
<td>
PUC3_DC3
</td>
<td>
Air Temperature (FMIClimGrid_Tair)
</td>
<td>
https://etsin.fairdata.fi/dataset/63b58d1a-dc23-44eb-87e6-d3c31b9a57f9
</td> </tr>
<tr>
<td>
PUC3_DC4
</td>
<td>
Snow depth
(FMIClimGrid_Snow)
</td>
<td>
https://etsin.fairdata.fi/dataset/d72b6068-9ff2-4e82-90b3-057d145a274f
</td> </tr> </table>
<table>
<tr>
<th>
PUC3_DC5
</th>
<th>
Climatological data for meteorological observations
(AWS_CLIM_FIN)
</th>
<th>
https://en.ilmatieteenlaitos.fi/open-data-sets-available
</th> </tr>
<tr>
<td>
PUC3_DC6
</td>
<td>
Tweet collections
</td>
<td>
</td> </tr>
<tr>
<td>
PUC3_DC7
</td>
<td>
Climate change scenario projections (CCS_FIN)
</td>
<td>
https://en.ilmatieteenlaitos.fi/open-data-sets-available
</td> </tr>
<tr>
<td>
PUC3_DC8
</td>
<td>
Historical and current weather observations from Finnish automatic weather
stations (AWS_OBS_FIN)
</td>
<td>
https://en.ilmatieteenlaitos.fi/open-data-sets-available
</td> </tr>
<tr>
<td>
PUC3_DC9
</td>
<td>
HIRLAM Weather Forecast (HIRLAM_NWP)
</td>
<td>
https://en.ilmatieteenlaitos.fi/open-data-sets-available
</td> </tr>
<tr>
<td>
PUC3_DC10
</td>
<td>
Finnish regions and municipalities
</td>
<td>
https://kartta.paikkatietoikkuna.fi/?lang=en
</td> </tr>
<tr>
<td>
PUC3_DC11
</td>
<td>
Reindeer herding districts
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
PUC3_DC12
</td>
<td>
FTIA Road Maintenance
Classification
(FTIA_RoadMC)
</td>
<td>
https://julkinen.vayla.fi/oskari/
</td> </tr> </table>
Table 10: PUC3 Products
## 4.4 EOPEN Products - FAIR (Findable, Accessible, Interoperable, Re-usable) characteristics and LTDP needs
In the following, the FAIR characteristics of the EOPEN Products are briefly
summarized. They have been derived based on the EC DMP guidelines; as an
example, the analysis for the Twitter datasets is reported in Appendix C.
EOPEN Products are normally open data. Copernicus DWH data used in the
platform are regulated through the ESA Data User Licence (see Appendix A), the
FMI research Data Policy is illustrated in Appendix B, and the data policies
for the Copernicus Emergency Management Service and Land Service products
accessed through the platform are presented in Appendix D2.
EOPEN Product specifications, including accessibility and archiving, are
reported in Sections 5 and 6.
### 4.4.1 PUC1 products vs FAIR characteristics
PUC1 products are mainly open, either derived from Copernicus DWH open data or
available through the Land Services and from FMI.
Products generated in the platform for stakeholder (AAWA) usage, or generated
by AAWA by exploiting EOPEN resources, are also open, as they stem from a
public authority. Hereafter, some information on the codification and
accessibility of the AAWA product, namely the AAWA AMICO Early Warning System
Flood Forecast (ref. 6.2.5), is provided.
AAWA (AMICO) products normally follow the AAWA internal codification as well
as the European codification (for basin and river); metadata, including
information on the environment and the date of generation, can be generated
upon request, and search keywords can be provided if needed.
AAWA products will be made available and open (openGL license) in the platform
without time limitations; moreover, they will also be available through the
AAWA website in a dedicated repository. Several formats are available, namely
PDF, GeoTIFF and ASCII, as needed in GIS software.
Their open, readable (not compiled) formats and standard vocabulary make them
re-usable.
### 4.4.2 PUC2 products vs FAIR characteristics
PUC2 products are mainly open, either derived from Copernicus DWH data or from
the Korean Meteorological Service; some crop data collected locally can be
proprietary data.
In-field inspection data is open only to domestic researchers; Korean land
cover data can be accessed only by logging in to the Ministry of Environment
website (a Korean phone number is needed for user authentication); farmers'
claims data are not accessible; Korean rice yield data is open access, but
only as city-level in-situ data; statistical data on national fertilizer usage
is available only at the national level.
PUC2 products, namely the Rice Status Indicators and Paddy Rice Mapping (ref.
5.2), are FAIR compliant.
### 4.4.3 PUC3 products vs FAIR characteristics
PUC3 products are normally open data provided by FMI; they are available from
three different sources: FMI Open Data, GlobSnow from the Sodankylä National
Satellite Data Centre, and grid-interpolated temperature and snow depth maps.
The GlobSnow and the SMOS Level 3 Freeze/Thaw products are officially
unlicensed open data and their usage is unrestricted. The FTIA road
maintenance classification is open and unrestricted data and can be obtained
from the FTIA Open Data API. The datasets are provided as NetCDF archives. At
the moment, these datasets are available from an FTP server, but in the near
future they can also be obtained from the GeoServer. If needed, metadata for
these datasets can be obtained from the Sodankylä National Satellite Data
Centre portal and, later, from the GeoServer.
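As a minimal illustration of how one of these NetCDF archives could be
inspected once retrieved from the FTP server or, later, the GeoServer, the
sketch below uses the Python netCDF4 library; the file name is hypothetical
and variable names depend on the product:

```python
# Minimal sketch: inspecting a retrieved PUC3 NetCDF archive. The file
# name is hypothetical; variable names depend on the specific product.
from netCDF4 import Dataset

with Dataset("globsnow_swe_example.nc") as nc:
    # Global attributes typically carry the product-level metadata.
    for name in nc.ncattrs():
        print(f"{name}: {getattr(nc, name)}")
    # List the variables with their dimensions and shapes.
    for var_name, var in nc.variables.items():
        print(var_name, var.dimensions, var.shape)
```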
The grid-interpolated maps are part of the FMI ClimGrid dataset and can be
obtained from the Paituli data portal (https://etsin.fairdata.fi/). Metadata
for these can also be obtained from the portal. They adhere to the Creative
Commons Attribution 4.0 International (CC-BY 4.0) license similarly to the FMI
Open Data.
All these datasets can be re-used and modified in the EOPEN platform by third
parties. They are fully compliant with the FAIR paradigm.
### 4.4.4 Twitter products vs FAIR characteristics
Twitter data product FAIR characteristics are reported in Appendix C.
### 4.4.5 EOPEN LTDP data needs
As discussed in Section 1.2, all data collected and made available in the
platform relate to user applications. EOPEN LTDP needs have therefore been
analysed with reference to the PUCs.
Based on these data, the following guidelines have been established.
<table>
<tr>
<th>
**Data description**
</th>
<th>
**Category**
</th>
<th>
**FAIR and LTDP considerations**
</th> </tr>
<tr>
<td>
Data from Copernicus sources (Data Warehouse, Thematic Services, Emergency
Management Services) that are free to access and managed by Copernicus.
</td>
<td>
1
</td>
<td>
At source. No action required in EOPEN other than to identify the data
</td> </tr>
<tr>
<td>
Data from other services or EC projects providing open data
</td>
<td>
2
</td>
<td>
At source. No action required in EOPEN other than to identify
</td> </tr>
<tr>
<td>
Public domain non-EO data sets
</td>
<td>
3
</td>
<td>
Evaluation of the access rights and utilisation required
</td> </tr>
<tr>
<td>
Non-Public domain non-EO data sets
</td>
<td>
4
</td>
<td>
Evaluation of the access rights and utilisation required
</td> </tr>
<tr>
<td>
Data generated in EOPEN and shared to all users
</td>
<td>
5
</td>
<td>
Responsibility of EOPEN
</td> </tr>
<tr>
<td>
Data generated in EOPEN by stakeholders (Use
Cases)
</td>
<td>
6
</td>
<td>
Responsibility of the stakeholder
</td> </tr>
<tr>
<td>
Proprietary datasets
</td>
<td>
7
</td>
<td>
Restricted and dependent on the stakeholder
</td> </tr> </table>
Normally, EO data from the Copernicus DWH are not archived (only metadata are
saved). LTDP is also not considered for the non-EO data provided by FMI or
coming from Twitter: FMI already has its own procedures in place, whereas
CERTH is taking care of the Twitter archiving.
LTDP needs are limited to PUC1 (AAWA), which is exploiting the platform
resources to generate some of its products, although archiving is also
foreseen at its premises.
About 250 GB have been estimated for LTDP of high-resolution EO data obtained
from the Copernicus DWH in limited amounts under quota restrictions (possibly
including Copernicus EMS data-products on flood event occurrence during the
second half of the project, as rush datasets), as well as of weather forecast
data (not archived at FMI) and of AAWA products generated on the platform as
EOPEN Products.
# 5 EOPEN PRODUCTS GENERATED IN THE PLATFORM
EOPEN products generated in the platform refer, in particular, to PUC1 and
PUC2, as PUC3 products are provided by FMI (ref. 6.2.1).
PUC product specifications are presented through a table such as the one shown
below, suitable for describing geospatial layers.
<table>
<tr>
<th>
**Dataset ID.**
</th>
<th>
</th> </tr>
<tr>
<td>
**Dataset Name**
</td>
<td>
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
</td> </tr>
<tr>
<td>
**Description (Content Specification)**
</td>
<td>
</td> </tr>
<tr>
<td>
**Output Layers**
</td>
<td>
</td> </tr>
<tr>
<td>
**Measurement Unit**
</td>
<td>
</td> </tr>
<tr>
<td>
**Temporal/spatial applicable domains**
</td>
<td>
</td> </tr>
<tr>
<td>
**Temporal coverage**
</td>
<td>
</td> </tr>
<tr>
<td>
**Spatial Coverage / Area**
</td>
<td>
</td> </tr>
<tr>
<td>
**Spatial Resolution / Scale (Data Grid)**
</td>
<td>
</td> </tr>
<tr>
<td>
**Geographic projection /** **Reference system**
</td>
<td>
</td> </tr>
<tr>
<td>
**Input Data/Sources**
</td>
<td>
</td> </tr>
<tr>
<td>
**Input Data Archiving and rolling policies**
</td>
<td>
</td> </tr>
<tr>
<td>
**Frequency of update (refresh rate)**
</td>
<td>
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
</td> </tr>
<tr>
<td>
**Naming convention**
</td>
<td>
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
</td> </tr> </table>
1387_ERSAT GGC_776039.md
# INTRODUCTION
This document constitutes the first issue of Deliverable 1.2 “D1.2 Data
Management Plan (DMP)” in the GSA framework of the project titled “ERSAT
Galileo Game Changer”.
This document has been prepared to describe the data management life cycle for
all data sets that will be collected, processed or generated by the ERSAT GGC
project. It outlines how research data will be handled during and after the
project, describes what data will be collected, processed or generated and
what methodologies and standards are to be applied, and defines whether and
how this data will be shared and/or made open, and how it will be curated and
preserved. The DMP is not a fixed document; it will evolve and gain more
precision and substance during the lifespan of the project. According to Grant
Agreement 776039, the first version of the DMP is to be delivered in month 6
of the project. The DMP is a living document and will be updated by the
Project Coordinator on the basis of the project's needs. In order to keep the
DMP up to date, the Project Coordinator will include in the agenda of the
project meetings a session dedicated to the DMP, used to collect feedback and
to further expand the list of data that the ERSAT GGC project will make
available. The scope of this first version is to present an overall approach
towards the processing of the data produced within the framework of the
project.
Chapters 2-5 introduce the main objectives of ERSAT GGC and of the Data
Management Plan. Chapter 6 briefly describes the Cooperation Tool, the
internal online storage platform. The different categories of data, together
with the selection of open data, its preservation and data sharing, are
described in Chapter 7. Chapter 8 finally explains the required resources and
their allocation.
This document should be considered complementary to two other ERSAT GGC
deliverables, “D1.1 Quality Plan” and “D7.1 Dissemination Plan”, which contain
additional information and details on the data management defined and applied
in the ERSAT GGC project.
# OBJECTIVES OF ERSAT GGC PROJECT
ERSAT GGC project is conceived for speeding up the certification process of
EGNSS assets according to the ERTMS rules. It is a 24-month follow up of the
ERSAT (ERtms + SATellite) EAV program launched in 2015 by GSA, technically led
by RFI and Ansaldo STS for integrating satellite technologies on the ERTMS
platform. Primary goals of ERSAT GGC are:
1. to accelerate the standardization process at European level for including the satellite requirements into the new ERTMS STI (Standard for Technical Interoperability), by delivering a certified enhanced functional ERTMS + SAT architecture with proper functional and not functional test specification;
2. to define and test a certified standard process and tools for classifying track areas for locating virtual balises;
3. and to allow RFI, recently nominated Game Changer for integrating satellite technology into ERTMS, to launch an operational line by 2020, the same year Galileo services will be operational.
As a result of previous projects, EGNSS and ERTMS, both pillars of the
European industrial policy, are becoming tightly intertwined and backed by a
mutually supportive business model, a prerequisite for the growth of EGNSS
applications and for the deployment of economically sustainable ERTMS
solutions on the local and regional market, which today is dominated by legacy
systems, most of them old and still manually operated.
ERSAT GGC will rely on the achievements of the most relevant EC- and
GSA-funded projects, such as NGTC, ERSAT EAV, STARS and RHINOS, whose
individual coordinators are partners of the ERSAT GGC consortium.
The project will exploit a fully operational test bed located in Sardinia on
the 50 km double-track line between Cagliari and San Gavino. This test bed
includes the main ERTMS constituents, a fully equipped train, and the
EGNSS/wireless assets needed to operate the ERTMS system based on the
satellite-based Location Determination System (LDS). Other facilities, such as
those of the CEDEX laboratory, DLR and RFI, will be exploited for the
certification process.
The ERSAT GGC consortium includes RFI, SNCF and ADIF as the main European rail
stakeholders and two independent Notified Bodies, Italcertifer and Bureau
Veritas, which are already supporting RFI in the certification process.
# ERSAT GGC GRANT AGREEMENT
According to Article 29.3 “Open access to research data” of the ERSAT GGC
Grant Agreement (GA N° 776039), all the ERSAT GGC beneficiaries must:
1. deposit in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate – free of charge for any user – the following:
1. the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible
2. other data, including associated metadata, as specified and within the deadlines laid down in the data management plan
2. provide information – via the repository – about tools and instruments at the disposal of the beneficiaries and necessary for validating the results (and – where possible – provide the tools and instruments themselves)
This does not change the obligation to protect results in Article 27, the
confidentiality obligations in Article 36, the security obligations in Article
37 or the obligations to protect personal data in Article 39, all of which
still apply.
As an exception, the beneficiaries do not have to ensure open access to
specific parts of their research data if the achievement of the action’s main
objective would be jeopardised by making those specific parts of the research
data openly accessible. In this case, the data management plan must contain
the reason for not giving access.
# INFORMATION MANAGEMENT AND POLICY
Information Management is the discipline by which information is managed
within an organisation including acquisition, custodianship, distribution and
disposal. Information Management and related Governance have emerged as key
concerns for organisations in today’s environment of “Big Data” and increased
data storage needs, Cyber Security and higher information risks, and greater
compliance to regulatory and legal demands.
ERSAT GGC partners have assumed that the data archiving for the project will
be hosted within a cooperation tool external to any specific corporate
Information Management System. However, the processes and the management of
information are designed according to the Consortium Agreement [2], already
signed by all the partners involved.
The Consortium Agreement regulates the ownership and transfer of results (§8),
the access rights (§9), the non-disclosure of information (§10) and the
minimal Cyber Security measures required (§11).
As a brief summary, the agreed policies foresee that results are owned by the
partner who generated them. In case the contributions are so interdependent
that they are not separable, the partners will apply Consortium Agreement
§8.2. The transfer of results should be anticipated as much as possible;
however, a partner who wants to transfer results to other bodies has to inform
the other partners in advance.
All the partners have access rights to the contents of the project, and to all
the background knowledge each partner has agreed to share upon signing
(Attachment 1 to the Consortium Agreement). Access rights for Affiliated
entities are regulated in §9 of the Consortium Agreement.
Confidential information communicated during the project shall be kept and
managed as confidential for 4 years after the end of the Project.
Finally, each partner shall ensure the security of the computer systems used
to perform the services and activities covered by the Consortium Agreement,
including activities such as the transmission, reception, retention and
sharing of all documentation arising from the performance of the Consortium
Agreement.
# PROJECT DMP ORGANIZATION AND IMPLEMENTATION
The ERSAT GGC project is coordinated by RFI and managed through the Technical
Management Team. The project has a structured governance and management
framework that controls and directs decisions during the project, organised as
shown in the figure below:
Figure 1 Organizational structure
The DMP is issued as project deliverable D1.2 under work package 1, and will
be administrated under the governance structure within the Technical
Coordination, as shown in the figure above.
# ERSAT GGC DATA STORAGE AND BACKUP
The storage and maintenance of data produced within the ERSAT GGC project will
be handled on a platform that offers services and support specifically for the
management of Horizon 2020 projects. The Cooperation Tool
(https://www.cooperationtool.eu/projects/), an easily accessible and
user-friendly platform, aims at supporting the technical work of the
consortium members and at keeping track of the activities that will take place
within the whole duration of the project. In addition, the Cooperation Tool
serves as a storage place for all material planned to be collected, generated
or analysed throughout the life of the project. Access to the content varies
among beneficiaries and is provided based on the partners' involvement in the
different WPs.
The main functions in Cooperation Tool – relevant for the data management are
listed below:
* Data storage
* Exchange of data and information: upload and download documents (deliverables, minutes of meetings, agendas, presentations, Technical Annex, contact list, etc.)
# ERSAT GGC DATA SUMMARY
During the life of the ERSAT GGC project, a large amount of data will be
collected and processed. In addition, new data is expected to be generated.
The Consortium foresees the collection, processing and production of various
categories of data.
Please note that the list of data presented in the next chapters is not
exhaustive and will be populated/adapted during the project development or
whenever needed.
The following three categories of data will be managed:
* Input data;
* On going data;
* Output data.
The project will exploit data coming from several previous projects on the
applicability of EGNSS to ERTMS: these could be architectural designs and
information, as well as requirements collected from different stakeholders in
the railway framework, including the ERTMS specifications and guidelines
provided by international railway institutions (e.g. ERA, CENELEC, UNISIG).
These data will allow the synthesis and final certified issue of the
Functional ERSAT Architecture, which will be the reference for the new
interoperable standard.
Data coming from tests on EGNSS quality of service, performed in recent years
within all the related projects and within ERSAT GGC itself, will be the input
for the severe hazard analysis required to certify the solution and to
identify a migration strategy for upgrading existing ERTMS lines. Ongoing
EGNSS and environmental data collected by track and lab campaigns in the frame
of the WP4 activities (Track Survey and Track Classification) will not only
allow the definition of a standard procedure for classifying tracks for
locating the virtual balises, but will also be reusable by new projects to
design and assess new features of the ERSAT system covering EGNSS
vulnerabilities.
Certifiable tests and results will be produced by WP6, generating data
verifiable by European Railway and GNSS institutions and communities.
## “FAIR” Data
All the data and metadata in ERSAT GGC will be handled according to the FAIR
(findable, accessible, interoperable and reusable) principles.
<table>
<tr>
<th>
FINDABLE
</th>
<th>
Project outcomes of different type will be uploaded on ZENODO repository (see
Chapter 7.5), with a searchable resource.
Data and metadata are assigned a unique and persistent identifier.
</th> </tr>
<tr>
<td>
ACCESSIBLE
</td>
<td>
All the public data and metadata will be accessible on the project website.
Confidential data are retrievable by their identifier using a standardized
communications protocol
</td> </tr>
<tr>
<td>
INTEROPERABLE
</td>
<td>
Data and metadata use a formal, accessible, shared, and broadly applicable
language for knowledge representation.
</td> </tr>
<tr>
<td>
REUSABLE
</td>
<td>
The data collected and generated during the lifetime of the project will be
useful to different types of stakeholders, such as:
* ERSAT GGC consortium;
* European GNSS Agency;
* European Commission services and European Agencies;
* EU National Bodies;
* The general public including the broader scientific community
* Stakeholders
* Future H2020 (including S2R) projects.
</td> </tr> </table>
## Collection And Management Of Data Set
The project foresees several activities related to analysis and reporting, and
a significant activity of developing and testing tools, especially in WP4 and
WP6. Hence, the data are both documents and simulations.
This paragraph lists the input, ongoing and output data for each WP, as
expected at the release date of this DMP (Month 6 of the project). Each WP
leader is in charge of defining and describing the data set of their own WP,
and of updating it during the project whenever important changes occur,
informing RFI as Project Coordinator.
For WP relations please refer to §5 of this document.
<table>
<tr>
<th>
WP1- Project Coordination and Management
</th>
<th>
WP Leader: RFI
</th> </tr>
<tr>
<td>
Input Data
</td>
<td>
On Going Data
</td>
<td>
Output Data
</td> </tr>
<tr>
<td>
Data collected from the
Contractual Official documents,
i.e. Grant Agreement and official ERSAT GGC Proposal.
</td>
<td>
Data collected as documentation of the running work of each partner of the
Consortium, and data related to the schedule evolution.
</td>
<td>
Project management data
related to the quality of the projects outputs and deliverables, and the
management of the data themselves among all the project activities.
</td> </tr> </table>
<table>
<tr>
<th>
WP2- Enhanced ERTMS Specifications and Architecture
</th>
<th>
WP Leader: ASTS
</th> </tr>
<tr>
<td>
Input Data
</td>
<td>
On Going Data
</td>
<td>
Output Data
</td> </tr>
<tr>
<td>
Data collected from the
Contractual Official documents,
e.g. Grant Agreement, Consortium Agreements, etc., and from all the previous
projects related to
ERTMS + Satellite: ERSAT EAV, NGTC, X2Rail2, SBS, STARS, RHINOS, etc.
Data coming from the other WPs, e.g. preliminary Hazard Analysis from WP3.
Data collected from the ERTMS specification documents and CENELEC standards,
i.e.
parameters, thresholds, margins, etc.
</td>
<td>
Data contained in the intermediate Technical Notes generated in the
WP (data related to the
Enhanced Functional
ERTMS Architecture,
ERTMS Operational
Scenarios, ERTMS Mission Profile, and Test Specification).
</td>
<td>
Technical documents that constitute the deliverables
(Enhanced Functional ERTMS Architecture and related Test Specification).
</td> </tr> </table>
<table>
<tr>
<th>
WP3- Safety and Hazard Analysis
</th>
<th>
WP Leader: RINA-C
</th> </tr>
<tr>
<td>
Input Data
</td>
<td>
On Going Data
</td>
<td>
Output Data
</td> </tr>
<tr>
<td>
Data collected from the
Contractual Official documents,
e.g. Grant Agreement, etc., and from previous projects related to ERTMS +
Satellite, e.g. NGTC. Data coming from the other WPs, e.g. the Enhanced
Functional
ERTMS Architecture and ERTMS Operational Scenarios from WP2. The ERTMS system
requirement specification and the guidelines coming from CENELEC standards for
Railways Applications.
</td>
<td>
Data contained in the drafted deliverables generated by the WP activities.
</td>
<td>
Technical Analyses and related Reports that constitute the WP deliverables.
</td> </tr> </table>
<table>
<tr>
<th>
WP4- Track Survey and Track Classification
</th>
<th>
WP Leader: DLR
</th> </tr>
<tr>
<td>
Input Data
</td>
<td>
On Going Data
</td>
<td>
Output Data
</td> </tr>
<tr>
<td>
Data collected from the Contractual Official documents, i.e. Grant Agreement,
and from all the previous projects related to
ERTMS + Satellite: ERSAT EAV, NGTC, Shift2Rail, STARS, etc.
</td>
<td>
Information acquired and measured for characterizing the EGNSS SIS with
respect to disturbing phenomena, used to classify the track areas as suitable
or not suitable for placing virtual balises.
The following is a preliminary list:
* Receiver observations in a proprietary format, to be converted into RINEX-format messages. The receiver data should include at least the following measurements: carrier-to-noise ratio, and pseudorange and carrier phase measurements using the GPS C/A code signal at L1 frequency, as raw measurements without any iono or tropo model corrections.
* When allowed by the receiver, observations as above but obtained by utilizing further GNSS constellations, for example:
* GPS L5.
* GLONASS L1.
* Galileo E1 and E5.
* Power spectral density.
* If the spectrum analyser allows, snapshots of raw sample data.
* Video or image data may be acquired (e.g. fish-eye camera
pictures)
* In-phase and Quadrature-phase correlator outputs.
* GNSS samples collected at non-zero IF frequency by RPS (Receive and Playback of Signals) equipment.
* GNSS augmentation data.
* Constellation almanac data and navigation messages (ephemeris data).
* Receiver settings, e.g. elevation mask angle.
</td>
<td>
The data produced in WP4 will be defined during WP4.1 and WP4.2. A preliminary
list of the possibly generated data is:
* Track Area Classification Data: it consists of information about the classification category (i.e., red, yellow, green) of the different areas of the railway track network under consideration.
* Information about the track area classification assumptions and conditions to be taken into account for the railway virtual balise transmission system.
Other WP output data will be the Technical Analyses and related Reports that
constitute the WP deliverables.
</td> </tr> </table>
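Once receiver logs have been converted to RINEX, they can be loaded with
standard open-source tooling for further analysis. The sketch below uses the
georinex Python package as one possible reader (not a tool mandated by the
project); the file name and observation codes are illustrative:

```python
# Minimal sketch: loading a RINEX observation file from a track survey.
# georinex is one possible open-source reader, not mandated by ERSAT GGC;
# the file name is hypothetical.
import georinex as gr

obs = gr.load("survey_run_01.20o")   # returns an xarray Dataset
print(obs.data_vars)                  # e.g. C1C (pseudorange), S1C (C/N0)
# Mean carrier-to-noise ratio per satellite, if the S1C code is present:
if "S1C" in obs:
    print(obs["S1C"].mean(dim="time"))
```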
<table>
<tr>
<th>
WP5- Assessment of Enhanced ERTMS architecture and of Survey Process and
related toolset
</th>
<th>
WP Leader: BVI
</th> </tr>
<tr>
<td>
Input Data
</td>
<td>
On Going Data
</td>
<td>
Output Data
</td> </tr>
<tr>
<td>
Data collected from the Contractual Official documents, i.e. Grant Agreement,
and as the
Output of all the previous
WPs
</td>
<td>
Text type documents containing
* Issue of technical notes and requests for clarifications on the Input documents.
* Minutes and results of technical meetings
</td>
<td>
Official Assessment documents containing:
* Evaluation of the activities performed by the partners along the applicable phases of the life cycle defined in EN 50126
* Evaluation of the functional architecture defined according to what is foreseen in WP2.1 and amended on the grounds of the results obtained at the end of the activities performed in the following WPs (i.e. 3.1, 3.2)
* Evaluation of the test specification defined according to what is foreseen in WP2.2 and amended according to the modifications introduced in the functional architecture as per the above bullet
* Evaluation of the safety analysis carried out according to what is foreseen in WP3.x
* Evaluation of the Survey Process for Classifying the Track Areas resulting from WP4
* Evaluation of the toolset to be used for the evaluation of the Track Areas as per the previous WP5.2
</td> </tr> </table>
<table>
<tr>
<th>
WP6- Demonstration
</th>
<th>
WP Leader: RadioLabs
</th> </tr>
<tr>
<td>
Input Data
</td>
<td>
On Going Data
</td>
<td>
Output Data
</td> </tr>
<tr>
<td>
The WP needs of the data inputs specified in the following documents:
* A prototype of the track survey toolset designed, developed,
tested and evaluated in WP4 e WP5;
* Documents: D4.2 “Technical Specification of Survey Toolset” and D4.3 “Prototype
Implementation of the Survey Toolset” from WP4, as guide to
use the toolset;
* Documents: D4.1 “Procedure Specification Document; D4.4
“Measurement Campaign Report”, D4.5 “Process Execution Report” from WP4, to
plan and execute the track survey demonstration;
* Document D.7 “Technical Report about system integration in RFI laboratory” from WP4 to plan and execute the integration in RFI laboratory.
</td>
<td>
This WP will collect the same kind of data from the track survey and from the
simulators that will be configured during the project, according to the Grant
Agreement.
</td>
<td>
Technical Analyses and related Reports that constitute the WP deliverables.
</td> </tr> </table>
<table>
<tr>
<th>
WP7- Exploitation and Dissemination
</th>
<th>
WP Leader: UNIFE
</th> </tr>
<tr>
<td>
Input Data
</td>
<td>
On Going Data
</td>
<td>
Output Data
</td> </tr>
<tr>
<td>
Data collected from the Contractual Official documents, i.e. Grant Agreement,
and from the official ERSAT GGC Proposal.
</td>
<td>
Collect information related to the dissemination of the project performed by
the partners of the Consortium in events, presentations and publication.
</td>
<td>
Project templates and Logo created to represent the project and unify the
documentation of the project such as deliverables, agendas, Minutes of
Meetings and slides for presentations. Dissemination material used to present
and make stakeholders aware of the project in events and presentations.
</td> </tr> </table>
7.2.1 _Expected size of the data_
A significant volume of data is expected from the track-survey data collection
and the demonstration activities (WP4 and WP6). As a reference number, the
average size of acquired RF data for a 50 km run is 120 GB (minimum 90 GB,
maximum 155 GB), according to tests performed in the STARS project. Raw data
from the receiver amount to 300 MB for the same track.
The size will scale with the number of surveys, the length of the tracks and
the number of simulations performed, as the rough estimate below illustrates.
More details will be provided once the test plan is defined. Dedicated mass
storage shall be foreseen.
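As a rough, illustrative estimate of how the STARS reference figures scale
(the run count and track length below are assumptions, not figures from the
ERSAT GGC test plan):

```python
# Rough storage estimate scaling the STARS reference figures:
# ~120 GB of RF data and ~0.3 GB of receiver raw data per 50 km run.
# The run count and track length below are assumptions, not figures
# from the ERSAT GGC test plan.
RF_GB_PER_KM = 120 / 50    # ~2.4 GB/km
RAW_GB_PER_KM = 0.3 / 50   # ~0.006 GB/km

def estimate_storage_gb(num_runs: int, track_km: float) -> float:
    """Total storage (GB) for RF plus receiver raw data."""
    return num_runs * track_km * (RF_GB_PER_KM + RAW_GB_PER_KM)

# Example: 10 survey runs over the 50 km Cagliari-San Gavino line.
print(f"{estimate_storage_gb(10, 50.0):.0f} GB")   # ~1203 GB
```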
## Data Set Reference And Name
File naming will be coherent with the coding structure defined in the ERSAT
GGC Quality Plan (Ref 3), §5.2.2 “Document Management in the Cooperation
Tool”.
The identification code contains the following six sections:
[PROJECT] – [DOMAIN] – [TYPE] – [OWNER] – [NUMBER] – [VERSION]
where:
* [Project] is GGC for all ERSAT-GGC documents;
* [Domain] is the relevant domain in the Cooperation Tool (WP, Task or project body);
* [Type] is one letter defining the document category;
* [Owner] is the trigram of the deliverable leader organisation;
* [Number] is an order number allocated by the Cooperation Tool when the document is first created;
* [Version] is the incremental version number, automatically incremented at each upload.
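To make the convention concrete, the sketch below validates an identifier
against the six-section pattern; the separator handling, the field
vocabularies and the example identifier are illustrative assumptions, with the
authoritative definitions in the Quality Plan, §5.2.2:

```python
# Minimal sketch: validating an ERSAT GGC document identifier of the form
# [PROJECT]-[DOMAIN]-[TYPE]-[OWNER]-[NUMBER]-[VERSION]. The example
# identifier and the exact field vocabularies are assumptions; the
# authoritative definitions are in the Quality Plan, section 5.2.2.
import re

PATTERN = re.compile(
    r"^GGC-"                 # [PROJECT] is GGC for all documents
    r"(?P<domain>\w+)-"      # [DOMAIN]: WP, Task or project body
    r"(?P<type>[A-Z])-"      # [TYPE]: one letter for the document category
    r"(?P<owner>[A-Z]{3})-"  # [OWNER]: trigram of the leader organisation
    r"(?P<number>\d+)-"      # [NUMBER]: allocated by the Cooperation Tool
    r"(?P<version>\d+)$"     # [VERSION]: incremented at each upload
)

match = PATTERN.match("GGC-WP4-D-RFI-0042-3")   # hypothetical identifier
print(match.groupdict() if match else "not a valid identifier")
```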
## Data Security And Integrity
All the data and documents handled and produced within ERSAT GGC project will
be stored in the Cooperation Tool (refer to the Quality manual for further
information). For access to the project repository, users are provided with a
username (e-mail address) and password. The access to the Cooperation Tool and
the related rights are set by the Project Coordinator (RFI) with the support
of RINA-C.
## Data Sharing
The ERSAT GGC consortium is committed to the mandate for open access to
publications in the H2020 programme, as well as to participation in the Pilot
for Open Research Data.
The public website ( _http://www.ersat-ggc.eu/_ ) includes a sharing area,
where public documents/deliverables will be stored.
To make all data sets accessible to the scientific community, the ERSAT GGC
Consortium will also use ZENODO ( _https://zenodo.org/_ ) as an additional
repository for data and scientific publications of project outcomes.
Scientific results that are not protected and are useful for public use will
be uploaded to the ZENODO repository accordingly. All uploaded material will
be made available free of charge.
The list of documents can be further expanded if needed. At this stage, the
following documents have been identified:
* Electronic copies of the final version of manuscripts accepted for publication;
* Publications made available immediately by open access publishing, and publications that have passed the embargo period;
* Public deliverables;
* Public summaries of confidential deliverables (only where a public summary has been agreed for a confidential deliverable);
* All external dissemination material;
* Videos/audio and interactive materials such as lessons;
* Research data needed to validate the results presented in the publications.
ZENODO also assigns a Digital Object Identifier (DOI) to all publicly
available uploads, in order to make content easily and uniquely citable.
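As an illustration of how such DOI-carrying records can be discovered
programmatically, the sketch below queries Zenodo's public REST API; the
search term is hypothetical, and https://developers.zenodo.org remains the
authoritative API reference:

```python
# Minimal sketch: searching Zenodo's public REST API for project uploads.
# The query string is hypothetical; see https://developers.zenodo.org
# for the authoritative API description.
import requests

resp = requests.get(
    "https://zenodo.org/api/records",
    params={"q": "ERSAT GGC", "size": 5},
    timeout=30,
)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(hit.get("doi"), "-", hit["metadata"]["title"])
```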
## Ethical Aspects
ERSAT GGC does not process personal data as a part of the research activities.
# ALLOCATION OF RESOURCES
RFI, in its role as ERSAT GGC project coordinator, is responsible for
generating the first version of the DMP and any following versions if needed.
Moreover, RFI is responsible for the implementation of the Data Management
Plan (DMP).
All consortium partners are responsible for the data generation as well as
data quality in the respective WPs and tasks, as included in the description
of work (DoW). According to the Consortium Agreement, each partner is in
charge of the storage and management of the data related to their activities
inside the project.
UNIFE is in charge of the website development and management and, along with
RFI, is mainly responsible for the dissemination activities.
RINA-C is responsible for the correct management of the Cooperation Tool.
## Resources For Delivering The DMP
The main resources needed for the correct implementation of the DMP are all at
hand and easily accessible. Cooperation Tool, website and ZENODO can be
considered as the main tools to be used for the implementation of the plan.
# CONCLUSIONS
This document contains the first release of the DMP and represents the status
of the mandatory quality requirements at the time of deliverable D1.2.
This report should be read in association with all the referenced documents
and appendix material, including the EC Grant/Consortium Agreement, annexes
and guidelines.
The report will be subject to revisions as required to meet the needs of the
ERSAT-GGC project.
1389_CHE_776186.md
# Executive Summary
The CHE Data Management Plan responds to the requirements of the H2020 Open
Research Data Pilot to document which research data is being produced by the
CHE project, in which format, and how it will be made available.
It has already identified data sets for work packages 1 to 4, but is only to
be seen as an initial version which requires periodic updates to provide the
necessary detail as it emerges.
# Introduction
## Background
CHE, as a Coordination and Support Action, is bringing together European
expertise and a consolidated approach to building an operational CO 2
emission monitoring capacity. CHE partners are at the forefront of
developments in the compilation of emission inventories, the observation of
the carbon cycle from ground-based and satellite measurements, the process
modelling of the carbon cycle, atmospheric transport modelling, and data
assimilation and inversion systems. There will be four main areas of work
covering: observations, emission inventories, modelling and inversion systems.
The central questions that CHE will address are:
* What does it take to have a combined bottom-up and top-down estimation system capable of distinguishing the anthropogenic part of the CO2 budget from the natural fluxes?
* How can we make the first steps towards such a system that can use the high spatial and temporal resolution of satellite observations to monitor anthropogenic emissions at the required time scales?
* And what does it take to transform a research system into a fully operational monitoring capacity?
CHE will support a large community by providing a library of realistic CO2
simulations from global to city scale to examine the capacity for monitoring
future fossil fuel emissions and to adequately dimension space mission
requirements.
## Scope of this deliverable
### Objectives of this deliverable
D7.5 Data Management Plan provides the initial outline of the data management
plan including information on which data sets will be created in the project
and how they will be made available. This document represents only the initial
version where details may not be available yet, and it will be further
developed over the course of the project.
### Work performed in this deliverable
The work performed included, as per the DoA, the collection of the available
descriptions of data sets to be produced by the project, through a
questionnaire.
### Deviations and counter measures
No deviations have been encountered.
# Open Research Data Objectives
## Open Research Data Pilot
As per the Guidelines to the Rules on Open Access to Scientific Publications
and Open Access to Research Data in Horizon 2020 1 , Research Data
“Refers to information, in particular facts or numbers, collected to be
examined and considered as a basis for reasoning, discussion, or calculation.
In a research context, examples of data include statistics, results of
experiments, measurements, observations resulting from fieldwork, survey
results, interview recordings and images. The focus is on research data that
is available in digital form.”
The Open Research Data Pilot
“aims to improve and maximise access to and re-use of research data generated
by
Horizon 2020 projects 2 ” and applies to data sets that are
“needed to validate the results presented in scientific publications 2 ”.
The Data Management Plan is expected to
“specify what data will be open: detailing what data the project will
generate, whether and how “it will be exploited or made accessible for
verification and re-use, and how it will be curated and preserved 2 “.
## CHE Research Data
As per the CHE Description of Action, the products of CHE will comprise
reports, graphical displays, datasets and improved methods, algorithms and
code. The datasets will target a wide user community to support them with
parallel or alternative studies.
All mature data products of CHE will be made publicly available to maximize
the uptake by the scientific community. This also answers the requirement of
the Call to provide a series of simulation scenarios that could serve to
adequately dimension a space mission. It is envisaged to make use of three
parallel data portals to ensure full visibility of the datasets. These data
portals will be based on the ICOS Carbon Portal, the Global Carbon Atlas and
the Climate Data Store, which is currently under development by the Copernicus
Climate Change Service (C3S). The Technical Annex of the Delegation Agreement
between the European Commission and ECMWF regarding C3S explicitly mentions
that its Climate Data Store must be designed to allow for the monitoring of
climate impacts and climate drivers, including CO2 3 . The steps undertaken by
CHE towards building a European Support Capacity for Monitoring anthropogenic
CO2 emissions contribute directly to this operational requirement.
Table 1 below presents the envisaged output data sets as per the Description
of Action.
**Table 1: CHE Output Datasets**
<table>
<tr>
<th>
**Context**
</th>
<th>
**Models**
</th>
<th>
**Application**
</th>
<th>
**Output Fields**
</th> </tr>
<tr>
<td>
Global
</td>
<td>
IFS,
LMDZ, TM5, TM5+OpenIFS,
CCFFDAS
</td>
<td>
Global scale at spatial resolutions of 10 km or coarser aiming at representing
the whole globe with continuous transport models/process models of surface
fluxes
</td>
<td>
CO 2 fluxes, CO 2 atmospheric concentrations, other tracers, optimal
process parameter values
</td> </tr>
<tr>
<td>
Regional
</td>
<td>
CHIMERE, COSMO,
FLEXPART, LOTOS-EUROS, WRF-STILT
</td>
<td>
Regional to continental area at spatial resolution of 5 to
10 km aiming at
representing the evolution in limited-area domain with boundary conditions
</td>
<td>
CO 2 fluxes, CO 2 atmospheric concentrations, other tracers
</td> </tr>
<tr>
<td>
Cityscale
</td>
<td>
CHIMERE, COSMO, WRF-STILT, EULAG
</td>
<td>
Local targeted areas at spatial resolution of about 1 km or finer aiming at
representing detailed emissions
</td>
<td>
CO 2 fluxes, CO 2 atmospheric concentrations, other tracers
</td> </tr> </table>
## Data Management Plan Questionnaire
The following questionnaire has been provided to CHE work packages to gather
the information for this first version of the Data Management Plan.
**Table 2: Data Management Plan Questionnaire**
<table>
<tr>
<th>
** <Data set reference and name> **
</th>
<th>
</th> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
_Description of the data that will be generated or collected (or is already
available to the project), its origin (in case it is collected), nature and
scale and to whom it could be useful, and whether it underpins a scientific
publication. Information on the existence (or not) of similar data and the
possibilities for integration and reuse._
_Limitations?_
_Constraints?_
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
_Reference to existing suitable standards of the discipline. If these do not
exist, an outline on how and what metadata will be created._
_Will you generate proper metadata for you data?_
</td> </tr>
<tr>
<td>
</td>
<td>
_If yes: what do they look like?_
_If no: why?_
_Data format?_
_Will there be a review process to quality- check the data?_
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
_Description of how data will be shared, including access procedures, embargo
periods (if any), outlines of technical mechanisms for dissemination and
necessary software and other tools for enabling re-use, and definition of
whether access will be widely open or restricted to specific groups.
Identification of the repository where data will be stored, if already
existing and identified, indicating in particular the type of repository_
_(institutional, standard repository for the discipline, etc.)._
_In case the dataset cannot be shared, the reasons for this should be
mentioned (e.g. ethical, rules of personal data, intellectual property,
commercial, privacy-related, security-related)._
_License?_
_Access URL?_
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
_Description of the procedures that will be put in place for long-term
preservation of the data. Indication of how long the data should be preserved,
what is its approximated end volume, what the associated costs are and how
these are planned to be covered._
_At which Data Center do you want to store your data?_
_Is there an established workflow for_
_your requested DOI process in place?_
_According to which standards_
</td> </tr> </table>
# CHE Data Sets
The following sections provide the responses by work packages 1 to 4. Work
Packages 5 to 7 do not produce any data sets.
## Work Package 1
<table>
<tr>
<th>
**OCO-2**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data set description**
To be generated:
Two new satellite XCO2 L2 data products from OCO-2 using two complementary
approaches.
One year of data will be generated (aim:
2015)
</td>
<td>
Based on the UoL algorithm applied to OCO-2 level 2 data, employing machine
learning and filtering techniques that select high-quality data useful in
atmospheric inversions.
Based on a fast algorithm developed by UoB (Reuter et al., 2017a/b) named
FOCAL.
These data will be used in the inversions to investigate how source/sink
distributions change when satellite data are assimilated.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
Yes, we will use meta-data. See for example:
https://meta.icos-
cp.eu/objects/mJbBxyBUWvUxg05GwIQ2o38
Data format: netcdf4
Will there be a review process to quality- check the data: yes, users of the
data will be asked to provide feedback.
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
Data will be shared via existing data repositories like the ICOS-carbon portal
https://data.icos-cp.eu/portal/#search, or
EUDAT
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
At which Data Center do you want to store your data? EUDAT
Is there an established workflow for your requested DOI process in place? Yes
</td> </tr> </table>
<table>
<tr>
<th>
**Fossil fuel emissions**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
These data will be used as input in the inversions.
</td> </tr>
<tr>
<td>
To be generated:
spatiotemporal fossil fuel CO₂ emission estimates
</td>
<td>
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
Meta-data will be used
Data format: netcdf4
Will there be a review process to quality- check the data: yes, users of the
data will be asked to provide feedback.
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
Data will be shared via existing data repositories like the ICOS-carbon portal
https://data.icos-cp.eu/portal/#search, or
EUDAT
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
At which Data Center do you want to store your data? EUDAT
Is there an established workflow for your requested DOI process in place? Yes
</td> </tr> </table>
<table>
<tr>
<th>
**Inverted fluxes**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data set description**
To be generated:
Inversions will produce spatial maps of source sink distributions.
</td>
<td>
This model output forms the basis for further analysis in WP1
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The protocol will specify how to use metadata in describing model output.
Data format: netcdf4
Quality checks will be performed
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
Data will be shared via existing data repositories like the ICOS-carbon portal
https://data.icos-cp.eu/portal/#search, or
EUDAT
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
At which Data Center do you want to store your data? EUDAT
Is there an established workflow for your requested DOI process in place? Yes
</td> </tr> </table>
## Work Package 2
<table>
<tr>
<th>
**IFS CAMS nature runs T1, T2**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
Two sets (Tier 1 and Tier 2) of high-resolution global simulations (9 km
resolution) will be performed within WP2, to be used as boundary conditions
for the regional simulations. These data sets will also be made available to
the external community, e.g. for OSSE studies.
The simulations will run for the years 2015 and 2030 (assuming a future
emission scenario). The first data set (T1) is based on the current
configuration of the operational CAMS CO2 forecasting system, while T2 will
include an improved specification of emissions and will be based on the latest
NWP model version. T2 simulations will also be run in ensemble mode at lower
resolution to account for uncertainties in the emissions and transport
processes. There is also a plan to perform case studies for periods of special
interest, e.g. field experiments, where different models could be
inter-compared.
Note that because this data set is model-based only, it will be affected by
systematic errors associated with emissions and model transport.
<tr>
<td>
**Standards and metadata**
</td>
<td>
Metadata will be included in the grib header and ECMWF data catalogue.
The data will be available in grib format, which can be easily converted to
NetCDF using a variety of tools (cdo, ECMWF grib_api software)
The simulations will be evaluated using in situ and TCCON observations.
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
Data can be made available to partners via ftp and via the Climate Data Store
(CDS) to external users.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
The data will be archived in the CDS.
</td> </tr> </table>
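As a minimal illustration of the GRIB-to-NetCDF conversion mentioned in the
data set description above, the sketch below shells out to cdo from Python;
the file names are hypothetical and cdo must be installed locally:

```python
# Minimal sketch: converting a GRIB file to NetCDF with cdo, one of the
# tools mentioned above. File names are hypothetical; cdo must be on PATH.
import subprocess

subprocess.run(
    ["cdo", "-f", "nc", "copy", "che_t1_2015.grb", "che_t1_2015.nc"],
    check=True,
)
```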
<table>
<tr>
<th>
**Global anthropogenic EDGAR v4.2**
</th>
<th>
**emissions**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
</td>
<td>
Global emissions of CO2, CO and other trace gases used as input for Tier 1
global CO2 simulation.
</td> </tr>
<tr>
<td>
</td>
<td>
The data is available at 0.1° x 0.1° horizontal resolution as annual means for
multiple years up to 2010.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The product will be available in netCDF format compliant with CF conventions.
Metadata information is included in the global attributes of the netcdf file.
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
The data set is publicly available.
Access URL:
http://edgar.jrc.ec.europa.eu
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
The data set is archived at the European Joint Research Center JRC
</td> </tr> </table>
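To illustrate what metadata carried in the global attributes of a CF-compliant
netCDF file can look like in practice, here is a minimal sketch using xarray;
the variable, attribute values and file name are hypothetical, not an actual
EDGAR product:

```python
# Minimal sketch: writing a small CF-style netCDF file whose global
# attributes carry the data set metadata. All values are hypothetical.
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {
        "CO2_emissions": (
            ("lat", "lon"),
            np.zeros((1800, 3600), dtype="float32"),
            {"units": "kg m-2 s-1", "long_name": "CO2 emission flux"},
        )
    },
    coords={
        "lat": ("lat", np.linspace(-89.95, 89.95, 1800),
                {"units": "degrees_north"}),
        "lon": ("lon", np.linspace(-179.95, 179.95, 3600),
                {"units": "degrees_east"}),
    },
    attrs={
        "Conventions": "CF-1.6",
        "title": "Illustrative annual-mean emission grid",
        "source": "hypothetical example, not an actual EDGAR product",
    },
)
ds.to_netcdf("example_emissions.nc")
```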
<table>
<tr>
<th>
**Global anthropogenic EDGAR v4.3**
</th>
<th>
**emissions**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
</td>
<td>
Global emissions of CO2, CO and other trace gases used as input for Tier 2
global CO2 simulation for present-day and future emissions.
The data will be available at 0.1°x 0.1° horizontal resolution as annual means
for 2015 and 2030 and information for temporal disaggregation to hourly
fields.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
</td>
<td>
The product will be available in netCDF format compliant with CF conventions.
Metadata information will be included in the global attributes of the netcdf
file.
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
</td>
<td>
The data set will be made publicly available as part of Deliverable D2.3 of
CHE by Dec 2018.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
The data set will be archived at the
European Joint Research Center JRC
</td> </tr> </table>
<table>
<tr>
<th>
**European anthropogenic emissions TNO/CAMS81**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
European emissions of CO2, CO and other trace gases used as input for European
simulations in WP2 of CHE.
The data will be available at 1/16°x 1/8° horizontal resolution as annual
means for 2015 and 2030 and information for temporal disaggregation to hourly
fields.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The product will be available in netCDF format compliant with CF conventions.
</td> </tr>
<tr>
<td>
</td>
<td>
Metadata information will be included in the global attributes of the netcdf
file.
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
The data set will be made publicly available as part of Deliverable D2.3 of
CHE by Dec 2018.
It will be accessible through the CAMS catalogue.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
The data set will be archived at ECWMF
</td> </tr> </table>
<table>
<tr>
<th>
**European biosphere fluxes VPRM**
</th>
<th>
**from**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
</td>
<td>
European biosphere-atmosphere exchange fluxes of CO2 from the Vegetation
Photosynthesis and Respiration Model (VPRM) separately for gross
photosynthetic production and respiration. The VPRM parameters will be
calibrated against CO2 flux measurements from the FLUXNET network.
The data will be available at approx. 5 km x 5 km horizontal resolution for
the year 2015.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
</td>
<td>
The product will be available in netCDF format compliant with CF conventions.
Metadata information is included in the global attributes of the netcdf file.
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
</td>
<td>
The data set will be made publicly available as part of Deliverable D2.3 of
CHE.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
The data set is archived at ECMWF.
</td> </tr> </table>
<table>
<tr>
<th>
**European and regional CO2 simulations**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
European simulations of CO2 and CO performed with three regional models,
COSMO-GHG, WRF-GHG and LOTOS-EUROS. LOTOS-EUROS will also provide output for
reactive trace gases and aerosols. COSMO-GHG and WRF-GHG will also provide
output of meteorological fields.
The data will be available at approximately 5 km x 5 km horizontal and hourly
temporal resolution for the years 2015 and 2030 (with meteorology from 2015).
Regional simulations will be conducted for two different domains, Berlin and
Beijing.
Models for Berlin: COSMO-GHG, WRF-CHEM, LOTOS-EUROS
Models for Beijing: LOTOS-EUROS, WRF-CHEM
Output will be at approximately 1 km x 1 km horizontal resolution except for
LOTOS-EUROS (approx. 2 km x 2 km).
The output from the European and Berlin simulations will be used to generate
the synthetic Level-2 satellite observations (Deliverable D2.5).
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The product will be available as a collection of netCDF files compliant with
CF conventions. Metadata information will be included in the global attributes
of the netCDF files.
Detailed information on the models and simulations is given in Deliverable
D2.1 of CHE.
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
The model output will be made publicly available as Deliverable D2.4 of the
project by July 2018.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
The data set will be archived at ECMWF.
</td> </tr> </table>
<table>
<tr>
<th>
**Synthetic Level-2 satellite observations with realistic uncertainties**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
Based on the model simulations over Europe and Berlin, synthetic Level-2
satellite observations will be generated for a suite of hypothetical satellite
orbits, accounting for both random and systematic measurement errors.
The synthetic satellite data will mimic observations from a constellation of
Sentinel-7 satellites with 2 km x 2 km horizontal resolution and a swath of
approx. 250 km.
Two different uncertainty data sets will be generated, one applying a simple
error parameterization (for the complete year 2015), and a second one based on
full retrieval simulations (for selected periods in 2015).
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The product will be available in netCDF format following the standards for
satellite products defined in ESA's GHG-CCI project. For details see Product
Specification Document PSDv3 at http://esa-ghg-cci.org
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
The data set will be made publicly available as part of Deliverable D2.5 of
CHE.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
The data set is archived at ECMWF.
</td> </tr> </table>
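The simple error parameterization described above combines a per-sounding random error with a systematic component. A sketch of that sampling step, with assumed (not CHE) error magnitudes:

```python
# Generate synthetic XCO2 observations from model values by adding one
# overpass-wide bias (systematic) plus independent noise (random).
import numpy as np

rng = np.random.default_rng(42)
model_xco2 = 410.0 + rng.normal(0.0, 2.0, size=1000)  # placeholder model field [ppm]

sigma_random = 0.7       # assumed per-sounding random error [ppm]
sigma_systematic = 0.3   # assumed overpass bias standard deviation [ppm]

bias = rng.normal(0.0, sigma_systematic)                 # systematic part
noise = rng.normal(0.0, sigma_random, model_xco2.shape)  # random part
synthetic_obs = model_xco2 + bias + noise
```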
## Work Package 3
WP3 follows a slightly different version of the questionnaire as details will
emerge at a later stage.
<table>
<tr>
<th>
**Biospheric Net Ecosystem Exchange fluxes**
</th>
<th>
**MPI-Jena – Task 3.1**
</th> </tr>
<tr>
<td>
**Available: High-resolution biospheric NEE fluxes for Europe**
</td>
<td>
Data assimilation of mean seasonal cycles from European flux towers
(cross-validated), remote sensing and reanalysis data allowed the calculation
of NEE fluxes for Europe at 0.1° x 0.1° spatial and hourly temporal
resolution. This dataset does not include the direct land-use change fluxes.
</td> </tr>
<tr>
<td>
**Available: Low resolution global NEE fluxes**
</td>
<td>
Global NEE fluxes, excluding direct land-use change fluxes. This dataset has a
coarser spatial resolution (0.5° x 0.5°) but a half-hourly temporal
resolution. (Bodesheim et al., 2018, ESSD; doi.org/10.5194/essd-2017-130)
</td> </tr>
<tr>
<td>
**Generated: Enhanced European biospheric NEE fluxes**
</td>
<td>
Work in progress for Europe to obtain improved NEE fluxes (making use of
recent European flux and eddy covariance data of D. Papale).
</td> </tr>
<tr>
<td>
**Enhanced global NEE fluxes**
</td>
<td>
Work in progress on the global NEE fluxes to incorporate sub-daily weather
variability using hourly ERA5 reanalysis data.
</td> </tr>
<tr>
<td>
**Generated: Metadata/ Data sharing/ Archiving & preservation **
</td>
<td>
Metadata will be generated.
The datasets will be peer reviewed by the scientific journal in which they are
published.
</td> </tr>
<tr>
<td>
</td>
<td>
Datasets will be published with a DOI, preferably in the Earth System Science
Data journal.
</td> </tr> </table>
<table>
<tr>
<th>
**EDGAR CO2 emission gridmaps**
</th>
<th>
**JRC/ ECMWF – Task 3.2**
</th> </tr>
<tr>
<td>
**Available: EDGARv4.3.2 basis emission gridmaps for GHG (1970-2012)**
</td>
<td>
Bottom-up global CO2, CH4 and N2O emission gridmaps of anthropogenic
activities, excluding large-scale biomass burning and savannah fires as well
as land use, land-use change and forestry. These are annual, sector-specific
data for the years 1970 to 2012, based on IEA statistics, other international
statistics (USGS) and IPCC (2006) emission factors. Gridding was done on the
basis of over 300 different spatial proxy datasets (e.g. point sources of
E-PRTR, OpenStreetMap, …). For 2010, monthly emission gridmaps are also
produced. (Janssens-Maenhout et al., 2018, ESSD; doi.org/10.5194/essd-2017-79;
dataset DOI: https://data.europa.eu/doi/10.2904/JRC_DATASET_EDGAR)
</td> </tr>
<tr>
<td>
**Available: EDGARv4.3.2FT2016 CO2 emissions dataset (1990-2016)**
</td>
<td>
Global CO2 emission time series, calculated per country and activity for
fossil fuel use and industrial processes (cement production, carbonate use of
limestone and dolomite, non-energy use of fuels and other combustion, chemical
and metal processes, solvents, agricultural liming and urea, waste and fossil
fuel fires). Excluded are: short-cycle biomass burning (such as agricultural
waste burning), large-scale biomass burning (such as forest fires) and carbon
emissions/removals of land use, land-use change and forestry (LULUCF).
(Janssens-Maenhout et al., 2017, EUR 28766 EN report;
http://edgar.jrc.ec.europa.eu/overview.php?v=CO2andGHG1970-2016)
</td> </tr>
<tr>
<td>
**Generated: Updated EDGARv4.3.2 emission gridmaps of CO2 with the fast-track dataset EDGARv4.3.2FT2016**
</td>
<td>
Work in progress for updating the EDGARv4.3.2 gridmaps for 2012 to 2016 based
on EDGARv4.3.2FT2017 by country- and sector-specific scaling factors for each
grid cell.
</td> </tr>
<tr>
<td>
**Generated: EDGARv4.3.2 emission uncertainty gridmaps (ECMWF)**
</td>
<td>
Work has started on generating uncertainties of the CO2 time series per
country and sector. These will be translated into uncertainties of the
gridmaps, using their spatial and temporal distribution assumptions on
representativeness. ECMWF can then derive a covariance matrix for the
different sectors.
</td> </tr>
<tr>
<td>
**Metadata / Data sharing / Archiving & preservation**
</td>
<td>
Metadata are generated.
The datasets will be peer reviewed by the scientific journal in which they are
published.
</td> </tr>
<tr>
<td>
</td>
<td>
Datasets will be published with a DOI, preferably in the Earth System Science
Data journal, and will also be downloadable from the edgar.jrc.ec.europa.eu
website.
</td> </tr> </table>
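The fast-track update described above multiplies each grid cell by a country- and sector-specific scaling factor. A minimal numpy sketch of that operation, with placeholder grids and hypothetical factors:

```python
# Extend a 2012 sector gridmap to a later year via per-country scale factors.
import numpy as np

grid_2012 = np.random.rand(1800, 3600)                 # placeholder emission grid
country_id = np.random.randint(0, 3, grid_2012.shape)  # country index per cell
scale = np.array([1.04, 0.97, 1.10])                   # hypothetical factors, one sector

grid_2016 = grid_2012 * scale[country_id]              # cell-wise scaling
```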
## Work Package 4
<table>
<tr>
<th>
**D4.2 Database of high-resolution scenarios of CO2 and CO emissions**
</th>
<th>
</th> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
Database of high-resolution scenarios of CO2 and CO emissions associated with
anthropogenic activities in Europe over a full year, including associated
uncertainty statistics and documentation.
Generated by TNO from the inventory produced in WP2, which will be further
downscaled to 1 km x 1 km for the region EU28+NOR+CHE. A family of 10 grids
will be constructed by using the uncertainties associated with the emissions
of CO2 and CO (activity data, emission factors, spatial distribution proxies
and temporal emission timing) in a Monte Carlo simulation (see the sketch
after this table).
The dataset will be used by the project (T4.2 and T4.3) and can also be useful
for CO2 and CO transport simulations outside the project. Other products may
exist at this scale and for this whole domain, but with less constraint from
detailed activity data. It will still be possible to refine it further after
the project.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
This will be decided in the first year of the project.
</td> </tr>
<tr>
<td>
**Data Sharing**
</td>
<td>
This will be decided in the first year of the project.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td>
<td>
This will be decided in the first year of the project.
</td> </tr> </table>
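A sketch of the Monte Carlo ensemble idea referred to above, assuming lognormal relative uncertainties (an assumption for illustration, not the TNO implementation):

```python
# Draw a family of 10 emission grids by perturbing a base grid with sampled
# multiplicative errors representing activity/emission-factor uncertainty.
import numpy as np

rng = np.random.default_rng(0)
base_grid = np.random.rand(100, 100)   # placeholder base CO2 emission grid
rel_sigma = 0.2                        # assumed 20 % relative uncertainty

ensemble = [
    base_grid * rng.lognormal(mean=0.0, sigma=rel_sigma, size=base_grid.shape)
    for _ in range(10)                 # the "family of 10 grids" from the text
]
```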
# Conclusion
This initial Data Management Plan has identified a number of data sets for
each of work packages 1 to 4, providing the required details (where possible)
on what data will be open, how they will be made accessible, and how they will
be curated.
The Data Management Plan is to be seen as a living document and will be
reviewed and revised periodically to ensure that information contained therein
is up-to-date and correct.
**Document History**
<table>
<tr>
<th>
**Version**
</th>
<th>
**Author(s)**
</th>
<th>
**Date**
</th>
<th>
**Changes**
</th> </tr>
<tr>
<td>
0.1
</td>
<td>
Daniel Thiemert (ECMWF)
</td>
<td>
07/03/2018
</td>
<td>
Initial Version for Internal Review
</td> </tr>
<tr>
<td>
1.0
</td>
<td>
Daniel Thiemert (ECMWF)
</td>
<td>
21/03/2018
</td>
<td>
Final version after internal review
</td> </tr>
</table>
**Internal Review History**
<table>
<tr>
<th>
**Internal Reviewers**
</th>
<th>
**Date**
</th>
<th>
**Comments**
</th> </tr>
<tr>
<td>
Laure Brooker (ADS SAS)
</td>
<td>
15/03/2018
</td>
<td>
Approved with comments
</td> </tr>
<tr>
<td>
Denis Siméoni (TAS)
</td>
<td>
15/03/2018
</td>
<td>
Approved with comments
</td> </tr>
</table>
**Estimated Effort Contribution per Partner**
<table>
<tr>
<th>
**Partner**
</th>
<th>
**Effort**
</th> </tr>
<tr>
<td>
ECMWF
</td>
<td>
0.1
</td> </tr>
<tr>
<td>
**Total**
</td>
<td>
**0.1**
</td> </tr> </table>
This publication reflects the views only of the author, and the Commission
cannot be held responsible for any use which may be made of the information
contained therein.
Executive Summary
This deliverable is the second version of the Data Management Plan (DMP) and
describes the updated DMP for the CANDELA project. The goal of this document
is to provide an overview of all datasets used by the project.
The first DMP version (D5.4 [15]) follows the guidelines of the template
provided by the European Commission (Template Horizon 2020 Data Management
Plan [1] [2]). The Data Management task (T5.3) is included within the Project
Management and Coordination work package (WP5).
The second DMP has been updated, including a new sub-section in chapter “2.1
FAIR data”, in order to clarify the current project status concerning the need
for cataloguing the data produced by CANDELA processing services.
In addition, the CANDELA project datasets (section 3) have also been updated
including detailed information for each of the project use cases. Following
the structure of DMP v1, this document includes dataset information for the
project subcases (see deliverables [3][13] and [4][14], Urbanization,
Vineyard, Forest Disaster and Forest Health). The information for each project
dataset has been provided with the same structure:
* Dataset reference
* Dataset Description
* Standards and metadata
* Data Sharing
* Archiving and preservation
Section 4 has also been updated including details of the Additional datasets
used by CANDELA following the Data Warehouse mechanism (DWH)[5].
Finally, the conclusions have also been updated including related information
about the use of data in the context of the aforementioned project use cases.
# 1 Introduction
## 1.1 Purpose of the document
The purpose of the DMP is to provide an analysis of the main elements of the
data management policy that will be used by the project [6]. Following the DMP
template, this document provides for each dataset:
* Identifier and description
* Reference to existing suitable standards
* Description about data sharing and preservation
During the project lifetime, the document needs to be updated, so the release
of three versions of the CANDELA DMP has been planned.
* D5.4 DMP v1: the first version of the document based on the DMP template provided by the European Commission [1][2].
* D5.5 DMP v2: the DMP will evolve and will be updated in October 2019 (M18).
* D5.6 DMP v3: the final version of the document is planned in October 2020 (M30)
## 1.2 Relation to other project work
The CANDELA project has planned reports on the request of Additional Datasets
through the Data Warehouse mechanism managed by ESA. The different versions
of the DMP will include references to these reports.
## 1.3 Structure of the document
This document is structured in four major chapters described below:
* **Chapter 2** presents the Data Management Plan (FAIR data principle)
* **Chapter 3** presents the CANDELA project Datasets
* **Chapter 4** presents Additional Datasets
* **Chapter 5** presents the conclusions
# 2 Data management Plan
This section has been configured following the DMP template [1][2] in order to
achieve the FAIR principle for the project datasets.
## 2.1 FAIR data
According to the DMP template, the CANDELA research data should be “FAIR”:
Findable, Accessible, Interoperable and Re-usable. This section describes the
procedures to follow the FAIR principle.
### 2.1.1 Making data findable, including provisions for metadata
The CANDELA platform is built upon the CreoDIAS basis platform. Most of the
datasets used in the context of CANDELA are or will be directly retrieved
through the API provided by CreoDIAS: Sentinel 1 and 2 images, Landsat images,
other Copernicus data...
CreoDIAS provides a catalogue service containing information about:
* scenes/tiles
* orbits
* all available meta-data for individual scene and orbit
* link to quick-look images
* support for various processing levels (L1C, L2A, etc.)
* information about planned future acquisitions with a rich number of criteria (including at least satellite unit, instrument mode, polarisation, geographical area, time window)
The catalogue service supports querying based on a number of criteria:
* geographical area
* bounding box (rectangle)
* geometry (e.g. for agriculture fields)
* mission/sensor
* cloud coverage
* based on scene meta-data
* if more detailed cloud data are available, also based on location
* time interval
* absolute, based on "date" or "date and time" ranges defined using the ISO8601 standard (from, to, from/to)
* relative time intervals ("last week", "last month", "last 2 months", "from last 6 months up to last 3 months", etc.)
* advanced time intervals ("months May-August from 1985 to 2016")
* mosaicking priority (most recent/least recent/least cloudy)
INSPIRE and GEOSS compatibility is ensured for relevant data-sets. A query sketch combining several of these criteria is given below.
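A hypothetical sketch of such a query, combining mission, bounding box, cloud cover and an ISO8601 time interval. The endpoint and parameter names are illustrative only, not the actual CreoDIAS API:

```python
# Query a resto-style EO catalogue for Sentinel-2 scenes over the Sicoval ROI.
import requests

params = {
    "collection": "Sentinel2",
    "box": "1.39661,43.3902,1.71621,43.5669",   # lon/lat bounding box (WGS84)
    "cloudCover": "[0,20]",                      # at most 20 % cloud cover
    "startDate": "2016-01-01T00:00:00Z",         # ISO8601 interval start
    "completionDate": "2019-12-31T23:59:59Z",    # ISO8601 interval end
}
resp = requests.get("https://catalogue.example/search.json", params=params)  # placeholder URL
resp.raise_for_status()
print(len(resp.json().get("features", [])), "scenes found")
```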
Datasets produced in the frame of CANDELA will also be catalogued to make them
findable. In the beginning, data will be referenced in a CANDELA-specific
catalogue, but extending the referencing to the CreoDIAS catalogue is an
interesting option. A GeoServer [7] instance is used to run processing tasks
on the platform and can be used to manage data. A dedicated catalogue such as
GeoNetwork [8] can also be used to manage the metadata.
On the other hand, a GeoNetwork catalogue provides capabilities to describe
data standards (Inspire and ISO). It also allows searching data with keywords
or geographical extents. The following information can be used to create
metadata files:
* Metadata file creation date.
* Version of the data.
* Projection.
* Acquisition date.
* Resolution.
* Acquisition sensor.
The previous information will help make the data discoverable.
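A minimal sketch of a metadata record carrying the fields listed above; the field names are illustrative, whereas a real GeoNetwork entry would use ISO 19115/INSPIRE XML:

```python
# Illustrative metadata record for one product, mirroring the list above.
metadata = {
    "metadata_created": "2019-10-01",   # metadata file creation date
    "data_version": "1.0",              # version of the data
    "projection": "EPSG:4326",          # projection
    "acquisition_date": "2017-04-19",   # acquisition date
    "resolution_m": 10,                 # spatial resolution in metres
    "sensor": "Sentinel-2 MSI",         # acquisition sensor
}
```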
### 2.1.2 Making data openly accessible
Once the datasets are on the CANDELA platform, access permissions (who can
access what) can be defined. Depending on the willingness of the data owner, a
data sharing strategy will be drawn. Possible access permissions can be:
* Data without any restriction: all users can visualize, edit and download data without constraints.
* Protected data: rights will be defined for users in order to visualize, edit or download the data.
### 2.1.3 Making data interoperable
Datasets on the CANDELA platform will be interoperable due to their compliance
with international standards such as Web Feature Service (WFS), Web Map
Service (WMS) and Web Coverage Service (WCS). GeoServer and OWSLib [9] Python
library are used as implementation of these protocols.
It is worth noting that additional standards can be defined in the life cycle
of the project. For example, all satellite images will be in JPEG2000 or
GeoTIFF format and they will be associated with metadata and provided through
WMS services.
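As an example of this interoperability, a published layer can be fetched through WMS with the OWSLib library mentioned above. The service URL and layer name below are placeholders:

```python
# Retrieve a map image for the Bordeaux ROI from a WMS endpoint via OWSLib.
from owslib.wms import WebMapService

wms = WebMapService("https://geoserver.example/wms", version="1.3.0")  # placeholder URL
img = wms.getmap(
    layers=["candela:change_detection"],           # hypothetical layer name
    srs="EPSG:4326",
    bbox=(-0.703984, 44.7483, -0.46762, 44.9188),  # Bordeaux ROI from section 3.1.2
    size=(512, 512),
    format="image/png",
)
with open("layer.png", "wb") as f:
    f.write(img.read())
```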
### 2.1.4 Increase data re-use (through clarifying licenses)
For open data such as Sentinel images, the applicable licenses are indicated in
the metadata files. Users need to check these licenses before using or sharing
the data. On the other hand, use strategies will be defined for data and
services that will be produced as part of the project.
### 2.1.5 Current status on CANDELA project
At this stage of the project, each processing service decides on the
publication or not of the produced data on GeoServer. The need for cataloguing
the data resulting from CANDELA processing services depends on the
dissemination strategy and will be discussed internally between the partners
before setting up the GeoNetwork catalogue according to these needs.
# 3 CANDELA project datasets
## 3.1 SUC1_Urbanization
This sub-use case is concerned with studying the effect of urban expansion on
agricultural areas due to the continuous development of human settlements and
climatic changes in the regions of Sicoval, Bordeaux and Milan. These regions
of interest are well known for the continuous build-up of urban areas and
their rich agricultural zones. Hence, the following datasets have been
identified:
* Sentinel-2 images are used for the change detection pipeline developed by TAS FR.
* Sentinel-1 images are used for the change detection pipeline developed by TAS IT.
* Sentinel-1 and Sentinel-2 images are used for the data mining modules developed by DLR.
* Sentinel-1 and Sentinel-2 images are used for the semantic search tool developed by IRIT.
* Very High-Resolution optical images are used to validate the results of the change detection pipelines.
Please note the deviation in the datasets used in this sub-use case. For
instance, the VHR datasets are limited to optical Pleiades images coming from
the DWH mechanism, and only for the region of Bordeaux in France, due to quota
limitations. Additionally, no VHR SAR images were found for the Bordeaux
region.
### 3.1.1 Dataset reference and name
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Summary**
</th> </tr>
<tr>
<td>
Sentinel-2
</td>
<td>
Sentinel-2 satellite images are optical datasets produced as part of the EU
Copernicus program.
These data are collected from Sentinel-2 mission that comprises a
constellation of two polar-orbiting satellites.
Images coming from Sentinel-2 mission are characterized with a rich spectral
resolution (13 bands), a high spatial resolution (10m panchromatic band) and a
high acquisition frequency (5 days in cloudfree conditions). Hence, they are
very suitable for monitoring variability in land surface.
The datasets are in ESA SAFE format and contain inside one tile with all
spectral bands.
</td> </tr>
<tr>
<td>
Sentinel-1
</td>
<td>
Sentinel-1 satellite images are SAR datasets produced as part of the EU
Copernicus program. The Sentinel-1 mission consists of two satellites
operating day and night to perform C-band synthetic aperture radar imaging and
allow an acquisition frequency of 6 days for the same place. Data are acquired
according to different modes and the one that will be used for this use case
is the Interferometric Wideswath mode (IW). This mode exists at different
levels of correction, namely, the Single Look Complex (SLC) and the Ground
Range Detected (GRD).
</td> </tr>
<tr>
<td>
Pleiades
</td>
<td>
Pleiades is a constellation of two very high-resolution sensors that is able
to acquire any point on earth in under 24 hours. Pleiades has a narrower field
of view when compared to SPOT6/7. However, it
</td> </tr>
<tr>
<td>
</td>
<td>
provides images with a finer spatial resolution (50 cm). Pleiades images also
have 4 spectral bands at 2 m and a panchromatic band at 50 cm.
</td> </tr> </table>
### 3.1.2 Dataset description
<table>
<tr>
<th>
**ROI**
</th>
<th>
**Dataset**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
_Sicoval_
</td>
<td>
_Sentinel-2_
</td>
<td>
* _Period: Images acquired in the period between 2016 and 2019._
* _Surface: 700 km2._
* _Coordinates: xMin,yMin 1.39661,43.3902 : xMax,yMax_
_1.71621,43.5669 (WGS84)._
* _S2 tiles: 31TCJ._
</td> </tr>
<tr>
<td>
_Sentinel-1_
</td>
<td>
* _Period: Images acquired in the period between 2016 and 2019._
* _Surface: 700 km2._
* _Coordinates: xMin,yMin 1.39661,43.3902 : xMax,yMax_
_1.71621,43.5669 (WGS84)._
</td> </tr>
<tr>
<td>
_Bordeaux_
</td>
<td>
_Sentinel-2_
</td>
<td>
* _Period: Images acquired in the period between 2016 and 2019._
* _Surface: 500 km2._
* _Coordinates: xMin,yMin -0.703984,44.7483 : xMax,yMax -_
_0.46762,44.9188 (WGS84)._
* _S2 tiles: 30TXQ._
</td> </tr>
<tr>
<td>
_Optical VHR (Pleiades)_
</td>
<td>
* _Period: Images acquired in the period between summer 2013 and summer 2019._
* _Surface: 500 km2._
* _Coordinates: xMin,yMin -0.703984,44.7483 : xMax,yMax -_
_0.46762,44.9188 (WGS84)._
</td> </tr>
<tr>
<td>
_Sentinel-1_
</td>
<td>
* _Period: Images acquired in the period between 2016 and 2019._
* _Surface: 500 km2._
* _Coordinates: xMin,yMin -0.703984,44.7483 : xMax,yMax -_
_0.46762,44.9188 (WGS84)._
</td> </tr>
<tr>
<td>
_Milan_
</td>
<td>
_Sentinel-2_
</td>
<td>
* _Period: Images acquired in the period between 2016 and 2019._
* _Surface: 500 km2._
* _Coordinates: xMin,yMin 9.05816,45.3628 : xMax,yMax_
_9.29786,45.5361 (WGS84)._
* _S2 tiles: 32TNR._
</td> </tr>
<tr>
<td>
_Sentinel-1_
</td>
<td>
* _Period: Images acquired in the period between 2016 and 2019._
* _Surface: 500 km2._
* _Coordinates: xMin,yMin 9.05816,45.3628 : xMax,yMax_
_9.29786,45.5361 (WGS84)._
</td> </tr> </table>
### 3.1.3 Standards and metadata
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Standards**
</th>
<th>
**Metadata**
</th> </tr>
<tr>
<td>
Sentinel-2
_https://sentinel.esa.int/web/sentinel/user-guides/sentinel-2-msi/data-formats_
</td>
<td>
13 JPEG2000 images, one for each band.
</td>
<td>
XML file
</td> </tr>
<tr>
<td>
Pleiades
_https://www.intelligence-airbusds.com/en/8723-pleiades-and-spot-6-7-format-delivery_
</td>
<td>
2 JPEG2000 images, one for the B,G,R and NIR bands at 2 m spatial resolution
and the other for the panchromatic band at 0.5 m spatial resolution.
</td>
<td>
XML file
</td> </tr>
<tr>
<td>
Sentinel-1
_https://earth.esa.int/web/sentinel/technical-guides/sentinel-1-sar/products-algorithms/level-1-product-formatting_
</td>
<td>
Image data are stored in a Geotiff format.
</td>
<td>
XML file and Geotiff SAR image
</td> </tr> </table>
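As noted in the table above, a Sentinel-2 product carries its 13 bands as separate JPEG2000 files inside the SAFE structure. A sketch (hypothetical file name) of opening one band with rasterio:

```python
# Open a single Sentinel-2 band file (JPEG2000) and read it as a 2-D array.
import rasterio

band_path = "T30TXQ_20170419T105651_B04.jp2"  # hypothetical red-band file name
with rasterio.open(band_path) as src:
    red = src.read(1)                          # first (only) band in the file
    print(src.crs, src.res, red.shape)
```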
### 3.1.4 Data Sharing
Sentinel-1 and Sentinel-2 data are free of charge and can be downloaded from
the Copernicus Open Access Hub upon subscription. However, Pleiades images are
commercial data and can only be accessed by project partners.
### 3.1.5 Archiving and preservation
All the data described above will be kept on the data server of the platform
for the whole project lifetime for demonstration purposes. However, since some
datasets are commercial ones, they will have access restrictions.
## 3.2 SUC2_Vineyard
This sub-use case aims at assessing the damage level in vineyards caused by
natural hazards such as frost and hail. For this purpose, the following
datasets were identified for this sub-use case:
• Since the assessment of vineyard damage needs to be done as soon as the
event happens, Sentinel-2 images will be used for this use case (an
illustrative before/after comparison is sketched below).
Please note the deviation of this section compared to the first version of
this document. The use of VHR satellite images would be challenging, as it
requires programming the acquisition before and after a natural disaster,
which is difficult if not impossible.
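One common way to screen for such damage, sketched below with placeholder arrays, is an NDVI difference between the pre- and post-event Sentinel-2 scenes; this is an illustrative proxy, not necessarily the pipeline used in the project:

```python
# Flag pixels whose NDVI dropped sharply between the two acquisitions.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-6)   # epsilon avoids division by zero

before_nir, before_red = np.random.rand(2, 100, 100)  # placeholder reflectances
after_nir, after_red = np.random.rand(2, 100, 100)

delta = ndvi(after_nir, after_red) - ndvi(before_nir, before_red)
damaged = delta < -0.1   # assumed threshold for a strong NDVI decrease
```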
### 3.2.1 Dataset reference and name
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Summary**
</th> </tr>
<tr>
<td>
Sentinel-2
</td>
<td>
Sentinel-2 satellite images are optical datasets produced as part of the EU
Copernicus program.
These data are collected from Sentinel-2 mission that comprises a
constellation of two polar-orbiting satellites.
Images coming from Sentinel-2 mission are characterized with a rich spectral
resolution (13 bands), a high spatial resolution (10m panchromatic band) and a
high acquisition frequency (5 days in cloudfree conditions). Hence, they are
very suitable for monitoring variability in land surface.
The datasets are in ESA SAFE format and contain inside one tile with all
spectral bands.
</td> </tr> </table>
### 3.2.2 Dataset description
<table>
<tr>
<th>
**ROI**
</th>
<th>
**Dataset**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
_Bordeaux vineyards_
</td>
<td>
_Sentinel-2_
</td>
<td>
* _Period: two images, the first acquired on 19 April 2017 and the second on 29 April 2017 (before and after a frost event)._
* _Surface: 1233 km2._
* _Coordinates: xMin,yMin -0.374589,44.5206 : xMax,yMax 0.0949954,44.7976 (WGS84)._
* _S2 tiles: 30TYQ._
</td> </tr> </table>
### 3.2.3 Standards and metadata
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Standards**
</th>
<th>
**Metadata**
</th> </tr>
<tr>
<td>
Sentinel-2
_https://sentinel.esa.int/web/sentinel/user-guides/sentinel-2-msi/data-formats_
</td>
<td>
13 JPEG2000 images, one for each band.
</td>
<td>
XML file
</td> </tr> </table>
### 3.2.4 Data Sharing
Sentinel-2 data are free of charge and can be downloaded from Copernicus Open
Access Hub upon subscription.
### 3.2.5 Archiving and preservation
All the data described above will be kept on the data server of the platform
for the whole project lifetime for demonstration purposes. However, since some
datasets are commercial ones, they will have access restrictions.
## 3.3 SUC3_Forest-Disaster
This case concerns disasters occurring in forests (windfalls), using the
example of RDLP Toruń in 2017. The study plans to analyse the extent of the
disaster and the area and severity of damage. An analysis of the activities
carried out by the forestry services in the areas affected by the disaster is
also carried out.
Satellite images acquired using different sensors can be used for this sub-use
case.
* Sentinel-2 and Sentinel-1 datasets will be used to create a change detection layer with information about areas that were affected by windthrow. For this purpose, the Sentinel satellites appear to be an adequate solution, as they provide large area coverage and a frequent 5-day revisit time.
* VHR satellites (like SPOT 6/7, GeoEye-1) will be used to verify the results of the change detection algorithms.
* The initial use of high-resolution data has been changed due to limited access to VHR data and the time-consuming adaptation of algorithms to each type of satellite data.
### 3.3.1 Dataset reference and name
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Summary**
</th> </tr>
<tr>
<td>
Sentinel-1
</td>
<td>
The first satellite in the Copernicus Programme with a C-SAR instrument, which
can operate in four imaging modes with different resolutions (down to 25 m).
The mission is composed of two satellites sharing the same orbital plane.
</td> </tr>
<tr>
<td>
Sentinel-2
</td>
<td>
Sentinel-2 satellite images are optical datasets gathered within the EU
Copernicus program.
The Sentinel-2 mission from which the data originates consists of a
constellation of two satellites orbiting the Earth. Characteristics of
Sentinel-2 based images are: rich spectral resolution (13 bands), a high
spatial resolution (10m panchromatic band) and a high acquisition frequency (5
days in cloud-free conditions).
</td> </tr>
<tr>
<td>
Pleiades
</td>
<td>
Pleiades is a pair of satellites operated by CNES (the French space agency).
They provide high-resolution data for the panchromatic band (0.7 m) and the
multispectral bands (2.8 m).
</td> </tr>
<tr>
<td>
GeoEye -1
</td>
<td>
GeoEye-1 has captured optical images since 2008. Its resolution is below 1 m
for the panchromatic band and below 2 m for the multispectral bands. It has a
short revisit time for targets at 40° latitude.
</td> </tr>
</table>
### 3.3.2 Dataset description
<table>
<tr>
<th>
**ROI**
</th>
<th>
**Dataset**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
_Torun_1 and Torun_2_
</td>
<td>
_Sentinel-1_
</td>
<td>
* _Period: Images acquired just before 12.08.2017 and after_
* _Surface: 860 km2._
* _Coordinates: xMin,yMin 17.2997, 53.651802 : xMax,yMax 17.810054, 53.888559 (WGS84)_
* _S2 tile: 33UXV and 34UCE_
</td> </tr>
<tr>
<td>
</td>
<td>
_Sentinel-2_
</td>
<td>
* _Period: Images acquired just before 12.08.2017 and after_
* _Surface: 860 km2._
* _Coordinates: xMin,yMin 17.2997, 53.651802 : xMax,yMax 17.810054, 53.888559 (WGS84)_
</td> </tr>
<tr>
<td>
_Torun_2_
</td>
<td>
_GeoEye-1_
</td>
<td>
* _Period: Images acquired just before 12.08.2017_
* _Surface: 700 km2._
* _Coordinates: xMin,yMin 17.409115, 53.651802 : xMax,yMax 17.810054, 53.888559 (WGS84)_
</td> </tr>
<tr>
<td>
</td>
<td>
_Pleiades_
</td>
<td>
* _Period: Images acquired just after 12.08.2017_
* _Surface: 700 km2._
* _Coordinates: xMin,yMin 17.409115, 53.651802 : xMax,yMax 17.810054, 53.888559 (WGS84)_
</td> </tr> </table>
### 3.3.3 Standards and metadata
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Standards**
</th>
<th>
**Metadata**
</th> </tr>
<tr>
<td>
Sentinel-1
_https://earth.esa.int/web/sentinel/technical-guides/sentinel-1-sar/products-algorithms/level-1-product-formatting_
</td>
<td>
Image data are stored in a Geotiff format.
</td>
<td>
XML file,
geotiff SAR image &
GTiff image
</td> </tr>
<tr>
<td>
Sentinel-2
_https://sentinel.esa.int/web/sentinel/user-guides/sentinel-2-msi/data-formats_
</td>
<td>
13 JPEG2000 images, one for each band.
</td>
<td>
XML file
</td> </tr>
<tr>
<td>
Pleiades
_https://www.intelligence-airbusds.com/en/8723-pleiades-and-spot-6-7-format-delivery_
</td>
<td>
2 JPEG2000 images, one for the B,G,R and NIR bands at 2 m spatial resolution
and the other for the panchromatic band at 0.5 m spatial resolution.
</td>
<td>
XML file
</td> </tr>
<tr>
<td>
GeoEye-1
_https://www.euspaceimaging.com/about/satellites/geoeye1/_
</td>
<td>
2 GEOTIF images, one panchromatic and second multispectral (B, G, R and NIR).
</td>
<td>
XML file
</td> </tr> </table>
### 3.3.4 Data Sharing
Data collected from the Copernicus programme (Sentinel-1 and Sentinel-2) can
be downloaded from the Copernicus Open Access Hub.
Pleiades and GeoEye-1 data are distributed under commercial licences and can
only be accessed by project partners.
### 3.3.5 Archiving and preservation
The datasets mentioned above will be stored on the platform's data server
during the project life cycle.
However, due to the commercial licences of certain data, access to some data
sets will be limited.
## 3.4 SUC4_ Forest-Health
This case concerns tree stand analyses during the bark beetle invasion in
Białowieża. The analyses will include both large-scale and small-scale studies
of forest condition.
The following datasets were identified for this sub-use case:
• Satellite images with a large number of bands in the VNIR and SWIR range
will be useful (e.g. Sentinel-2)
Independent validation data were obtained for this scenario; for that reason,
VHR data were neither acquired nor used.
### 3.4.1 Dataset reference and name
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Summary**
</th> </tr>
<tr>
<td>
Sentinel-1
</td>
<td>
The first satellite in the Copernicus Programme with a C-SAR instrument, which
can operate in four imaging modes with different resolutions (down to 25 m).
The mission is composed of two satellites sharing the same orbital plane.
</td> </tr>
<tr>
<td>
Sentinel-2
</td>
<td>
Sentinel-2 satellite images are optical datasets gathered within the EU
Copernicus program.
The Sentinel-2 mission from which the data originates consists of a
constellation of two satellites orbiting the Earth. Characteristics of
Sentinel-2 based images are: rich spectral resolution (13 bands), a high
spatial resolution (10m panchromatic band) and a high acquisition frequency (5
days in cloud-free conditions).
</td> </tr> </table>
### 3.4.2 Dataset description
<table>
<tr>
<th>
**ROI**
</th>
<th>
**Dataset**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
_Białowieża_
</td>
<td>
_Sentinel-1_
</td>
<td>
* _Period: Images acquired from 2015 to 2018 in summer months_
* _Surface: 385 km2._
* _Coordinates: xMin,yMin 23.68852, 52.674583 : xMax,yMax_
_23.949071, 52.870889 (WGS84)_
* _S2 tile: 34UFD_
</td> </tr>
<tr>
<td>
_Sentinel-2_
</td>
<td>
* _Period: Images acquired from 2015 to 2018 in summer months_
* _Surface: 385 km2._
* _Coordinates: xMin,yMin 23.68852, 52.674583 : xMax,yMax 23.949071, 52.870889 (WGS84)_
</td> </tr> </table>
### 3.4.3 Standards and metadata
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Standards**
</th>
<th>
**Metadata**
</th> </tr>
<tr>
<td>
Sentinel-1
_https://earth.esa.int/web/sentinel/technical-guides/sentinel-1-sar/products-algorithms/level-1-product-formatting_
</td>
<td>
Image data are stored in a Geotiff format.
</td>
<td>
XML file,
geotiff SAR image &
GTiff image
</td> </tr>
<tr>
<td>
Sentinel-2
_https://sentinel.esa.int/web/sentinel/user-guides/sentinel-2-msi/data-formats_
</td>
<td>
13 JPEG2000 images, one for each band.
</td>
<td>
XML file
</td> </tr> </table>
### 3.4.4 Data Sharing
Data collected from the Copernicus programme (Sentinel-1 and Sentinel-2) can
be downloaded from the Copernicus Open Access Hub.
### 3.4.5 Archiving and preservation
The datasets mentioned above will be stored on the platform's data server
during the project life cycle.
# 4 Additional datasets
As part of the datasets used in the project, this section provides information
about the use of Additional datasets (ADD), following the process established
by ESA through the Data Warehouse mechanism (DWH). Detailed information about
the project request and the use of ADD is available in deliverables D5.7
[5][10] and D5.8 [12] respectively.
## 4.1 Request of additional datasets 2019
The CANDELA project requested Additional datasets (ADD) for 2019. The official
request was included in the project deliverable D5.7 [5][10] (v1.0 submitted
in September 2018 and v2.0 submitted in February 2019). The final list of
Additional datasets requested in February 2019 is presented below.
**Figure 1: Consolidated Request February 2019 (D5.7) [10]**
The project request was validated internally by the European Space Agency
(ESA) in August 2019, when the Agency published version 2.6 [11] of the Data
Access Portfolio Document (DAP). As shown in Figure 2, the validation included
datasets requested in both versions of D5.7 [5][10].
**Figure 2: Assigned quota, DAP version August 2019 [11]**
## 4.2 DWH use for 2019
The use of Additional datasets was reported in D5.8[12] following the two use
cases proposed in CANDELA:
* Use case 1: Agriculture and Macro-economics (see deliverables [3][13])
* Use case 2: Forest Health Monitoring (see deliverables [4][14])
D5.8 [12] describes the use of ADD in the project, focusing on their use for
validation purposes. Some examples include the use of VHR images obtained from
the DWH mechanism for validation by photo interpretation. In addition, VHR
images could also be used to verify the results obtained by machine learning
algorithms developed in the project. Sections 3 and 4 of the aforementioned
report D5.8 [12] include detailed information on the use of additional
datasets in the project.
# 5 Conclusions
The second version of the DMP (this document) includes updated information
with respect to the CANDELA DMP v1 released in December 2018 [15].
The document describes the broad lines of the data management plan of the
CANDELA project, including how to make the project data findable, accessible,
interoperable and reusable. As the CANDELA platform is not completed yet, more
details will emerge in the final DMP deliverable, which will better illustrate
the mechanisms of data standards, storage, sharing, security and ingestion.
Main changes in this version are:
* The project datasets described in section 3 have been updated including detailed information for each project use case.
* The needs for cataloguing the data resulting from CANDELA processing services will be discussed internally between project partners. Section 2.1.5 has been added to highlight the current status of this issue.
* Section 4 has been updated including information about the Additional datasets requested and used by CANDELA.
The next and final DMP (to be released in October 2020) will contain more
details about products and datasets produced using the analytic tools.
# Executive Summary
This deliverable is the first version of the project Data Management Plan
(DMP) and includes the initial information about the data that will be used by
the CANDELA project.
The first DMP version follows the guidelines of the template provided by the
European Commission (Template Horizon 2020 Data Management Plan [1] [2]). The
Data Management task (T5.3) is included within the Project Management and
Coordination work package (WP5).
Following the aforementioned template, the document provides the initial rules
to achieve the FAIR principle for the project datasets.
This document includes dataset information for the project subcases (see
deliverables [3] and [4], Urbanization, Vineyard, Forest Disaster and Forest
Health). The information for each project dataset has been provided with the
same structure:
* Dataset reference
* Dataset Description
* Standards and metadata
* Data Sharing
* Archiving and preservation
The document also provides information about Additional Datasets that will be
used in the project, although detailed information will be provided at due
time in the DWH project documents [5].
# Introduction
## Purpose of the document
The purpose of the DMP is to provide an analysis of the main elements of the
data management policy that will be used by the project [6]. Following the DMP
template, this document provides for each dataset:
* Identifier and description
* Reference to existing suitable standards
* Description about data sharing and preservation
During the project lifetime, the document needs to be updated, so the release
of three versions of the CANDELA DMP has been planned.
* D5.4 DMP v1: the first version of the document based on the DMP template provided by the European Commission [1][2].
* D5.5 DMP v2: the DMP will evolve and will be updated in October 2019 (M18).
* D5.6 DMP v3: the final version of the document is planned in October 2020 (M30)
## Relation to other project work
The CANDELA project has planned reports on the request of Additional Datasets
through the Data Warehouse mechanism managed by ESA. The different versions
of the DMP will include references to these reports.
## Structure of the document
This document is structured in four major chapters described below:
* **Chapter 2** presents the Data Management Plan (FAIR data principle)
* **Chapter 3** presents the CANDELA project Datasets
* **Chapter 4** presents Additional Datasets
* **Chapter 5** presents the conclusions
# Data management Plan
This section provides the initial information according to the DMP template
[1][2] in order to achieve the FAIR principle for the project datasets.
## FAIR data
According to the DMP template, the CANDELA research data should be “FAIR”:
Findable, Accessible, Interoperable and Re-usable. This section describes the
procedures to follow the FAIR principle.
### Making data findable, including provisions for metadata
The CANDELA platform is built upon the CreoDIAS basis platform. Most of the
datasets used in the context of CANDELA are or will be directly retrieved
through the API provided by CreoDIAS: Sentinel 1 and 2 images, Landsat images,
other Copernicus data...
CreoDIAS provides a catalogue service containing information about:
* scenes/tiles
* orbits
* all available meta-data for individual scene and orbit
* link to quick-look images
* support for various processing levels (L1C, L2A, etc.)
* information about planned future acquisitions with a rich number of criteria (including at least satellite unit, instrument mode, polarisation, geographical area, time window)
The catalogue service supports querying based on a number of criteria:
* geographical area
* bounding box (rectangle)
* geometry (e.g. for agriculture fields)
* mission/sensor
* cloud coverage
* based on scene meta-data
* if more detailed cloud data are available, also based on location
* time interval
* absolute, based on "date" or "date and time" ranges defined using the ISO8601 standard (from, to, from/to)
* relative time intervals ("last week", "last month", "last 2 months", "from last 6 months up to last 3 months", etc.)
* advanced time intervals ("months May-August from 1985 to 2016")
* mosaicking priority (most recent/least recent/least cloudy)
INSPIRE and GEOSS compatibility is ensured for relevant data-sets.
Dataset produced in the frame of CANDELA will also be catalogued to make them
findable. In the beginning, data will be referenced in a CANDELA specific
catalogue but extending the referencing to the CreoDIAS catalogue is an
interesting option. The data can be managed using a data processing server
such as GeoServer [7] while a metadata catalogue such as GeoNetwork [8] can be
used to manage the metadata.
On the other hand, a GeoNetwork catalogue provides capabilities to describe
data standards (Inspire and ISO). It also allows searching data with keywords
or geographical extents. The following information can be used to create
metadata files:
* Metadata file creation date.
* Version of the data.
* Projection.
* Acquisition date.
* Resolution.
* Acquisition sensor.
The previous information will help make the data discoverable.
### Making data openly accessible
Once the datasets are on the CANDELA platform, access permissions (who can
access what) can be defined. Depending on the willingness of the data owner, a
data sharing strategy will be drawn. Possible access permissions can be:
* Data without any restriction: all users can visualize, edit and download data without constraints.
* Protected data: rights will be defined for users in order to visualize, edit or download the data.
### Making data interoperable
Datasets on the CANDELA platform will be interoperable due to their compliance
with international standards such as Web Feature Service (WFS), Web Map
Service (WMS) and Web Coverage Service (WCS). GeoServer and OWSLib [9] Python
library are used as implementation of these protocols.
It is worth noting that additional standards can be defined in the life cycle
of the project. For example, all satellite images will be in JPEG2000 or
GeoTIFF format and they will be associated with metadata and provided through
WMS services.
### Increase data re-use (through clarifying licenses)
For open data such as Sentinel images, the applicable licenses are indicated in
the metadata files. Users need to check these licenses before using or sharing
the data. On the other hand, use strategies will be defined for data and
services that will be produced as part of the project.
# CANDELA project datasets
## SUC1_Urbanization
This sub-use case is concerned with studying the effect of urban expansion on
agricultural areas due to the continuous development of human settlements and
climatic changes in the regions of Sicoval, Bordeaux and Milan. These regions
of interest are well known for the continuous build-up of urban areas and
their rich agricultural zones. Hence, the following datasets have been
identified:
* Sentinel-2 images will be used to validate the change detection algorithm developed by TAS FR.
* Optical VHR1 & 2 images such as SPOT6/7 and Pleiades to test the change detection algorithm developed by TAS FR.
* Sentinel-1 images to validate the change detection algorithm developed by TAS IT.
* SAR VHR1 & 2 images such as TerraSAR-X to test the change detection algorithm developed by TAS IT.
### Dataset reference and name
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Summary**
</th> </tr>
<tr>
<td>
Sentinel-2
</td>
<td>
Sentinel-2 satellite images are optical datasets produced as part of the EU
Copernicus program.
These data are collected from Sentinel-2 mission that comprises a
constellation of two polar-orbiting satellites.
Images coming from Sentinel-2 mission are characterized with a rich spectral
resolution (13 bands), a high spatial resolution (10m panchromatic band) and a
high acquisition frequency (5 days in cloudfree conditions). Hence, they are
very suitable for monitoring variability in land surface.
The datasets are in ESA SAFE format and contain inside one tile with all
spectral bands.
</td> </tr>
<tr>
<td>
SPOT6/7
</td>
<td>
SPOT6/7 is a commercial mission series continuing the sustainable wide-swath,
high-resolution optical Earth observation provided by SPOT 5. The SPOT6/7
mission allows for a high level of coverage, up to a 60-km image swath.
Additionally, images acquired by SPOT6/7 have a very high spatial resolution
(1.5m). SPOT6/7 has also the capacity of daily revisit due to the phase
constellation of SPOT6 and SPOT7. A SPOT6/7 satellite image consists of 4
spectral bands (B,G,R and NIR) at 6 m and a panchromatic band at 1.5 m.
</td> </tr>
<tr>
<td>
Pleiades
</td>
<td>
Pleiades is a constellation of two very high-resolution sensors that is able
to acquire any point on earth in under 24 hours. Pleiades has a narrower field
of view when compared to SPOT6/7. However, it provides images with a finer
spatial resolution (50 cm). Pleiades images also have 4 spectral bands at 2 m
and a panchromatic band at 50 cm.
</td> </tr>
<tr>
<td>
Sentinel-1
</td>
<td>
Sentinel-1 satellite images are SAR datasets produced as part of the EU
Copernicus program. The Sentinel-1 mission consists of two satellites
operating day and night to perform C-band synthetic aperture radar imaging and
allow an acquisition frequency of 6 days for the same place. Data are acquired
according to different modes and the one that will be used for this use case
is the Interferometric Wideswath mode (IW). This mode exists at different
levels of correction, namely, the Single Look Complex (SLC) and the Ground
Range Detected (GRD).
</td> </tr>
<tr>
<td>
TerraSAR-X
</td>
<td>
TerraSAR-X is a mission that aims at providing very high-resolution SAR
images for scientific and commercial applications. This mission allows the
acquisition of SAR images using different modes such as Staring SpotLight (ST)
and ScanSAR (SC). TerraSAR-X products are available with different correction
levels such as Single Look Slant Range Complex and Multi Look Ground Range
Detected.
</td> </tr> </table>
### Dataset description
<table>
<tr>
<th>
**ROI**
</th>
<th>
**Dataset**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
_Sicoval_
</td>
<td>
_Sentinel-2_
</td>
<td>
* _Period: Images acquired in the period between 2016 and 2019._
* _Surface: 700 km2._
* _Coordinates: xMin,yMin 1.39661,43.3902 : xMax,yMax_
_1.71621,43.5669 (WGS84)._
* _S2 tiles: 31TCJ._
</td> </tr>
<tr>
<td>
_Optical VHR_
_(SPOT6/7,_
_Pleiades,_
_WorldView)_
</td>
<td>
* _Period: Images acquired in the period between summer 2013 and summer 2019._
* _Surface: 700 km2._
* _Coordinates: xMin,yMin 1.39661,43.3902 : xMax,yMax_
_1.71621,43.5669 (WGS84)._
</td> </tr>
<tr>
<td>
_Sentinel-1_
</td>
<td>
* _Period: Images acquired in the period between 2016 and 2019._
* _Surface: 700 km2._
* _Coordinates: xMin,yMin 1.39661,43.3902 : xMax,yMax_
_1.71621,43.5669 (WGS84)._
</td> </tr>
<tr>
<td>
_SAR VHR_
_(TerraSAR-X)_
</td>
<td>
* _Period: Images acquired in the period between summer 2013 and summer 2019._
* _Surface: 700 km2._
* _Coordinates: xMin,yMin 1.39661,43.3902 : xMax,yMax_
_1.71621,43.5669 (WGS84)._
</td> </tr>
<tr>
<td>
_Bordeaux_
</td>
<td>
_Sentinel-2_
</td>
<td>
* _Period: Images acquired in the period between 2016 and 2019._
* _Surface: 500 km2._
* _Coordinates: xMin,yMin -0.703984,44.7483 : xMax,yMax -_
_0.46762,44.9188 (WGS84)._
* _S2 tiles: 30TXQ._
</td> </tr>
<tr>
<td>
</td>
<td>
_Optical VHR_
_(SPOT6/7,_
_Pleiades,_
_WorldView)_
</td>
<td>
* _Period: Images acquired in the period between summer 2013 and summer 2019._
* _Surface: 500 km2._
* _Coordinates: xMin,yMin -0.703984,44.7483 : xMax,yMax -0.46762,44.9188 (WGS84)._
</td> </tr>
<tr>
<td>
</td>
<td>
_Sentinel-1_
</td>
<td>
* _Period: Images acquired in the period between 2016 and 2019._
* _Surface: 500 km2._
* _Coordinates: xMin,yMin -0.703984,44.7483 : xMax,yMax -0.46762,44.9188 (WGS84)._
</td> </tr>
<tr>
<td>
</td>
<td>
_SAR VHR_
_(TerraSAR-X)_
</td>
<td>
* _Period: Images acquired in the period between summer 2013 and summer 2019._
* _Surface: 500 km2._
* _Coordinates: xMin,yMin -0.703984,44.7483 : xMax,yMax -0.46762,44.9188 (WGS84)._
</td> </tr>
<tr>
<td>
_Milan_
</td>
<td>
_Sentinel-2_
</td>
<td>
* _Period: Images acquired in the period between 2016 and 2019._
* _Surface: 500 km2._
* _Coordinates: xMin,yMin 9.05816,45.3628 : xMax,yMax 9.29786,45.5361 (WGS84)._
* _S2 tiles: 32TNR._
</td> </tr>
<tr>
<td>
</td>
<td>
_Optical VHR_
_(SPOT6/7,_
_Pleiades,_
_WorldView)_
</td>
<td>
* _Period: Images acquired in the period between summer 2013 and summer 2019._
* _Surface: 500 km2._
* _Coordinates: xMin,yMin 9.05816,45.3628 : xMax,yMax 9.29786,45.5361 (WGS84)._
</td> </tr>
<tr>
<td>
</td>
<td>
_Sentinel-1_
</td>
<td>
* _Period: Images acquired in the period between 2016 and 2019._
* _Surface: 500 km2._
* _Coordinates: xMin,yMin 9.05816,45.3628 : xMax,yMax 9.29786,45.5361 (WGS84)._
</td> </tr>
<tr>
<td>
</td>
<td>
_SAR VHR_
_(TerraSAR-X)_
</td>
<td>
* _Period: Images acquired in the period between summer 2013 and summer 2019._
* _Surface: 500 km2._
* _Coordinates: xMin,yMin 9.05816,45.3628 : xMax,yMax 9.29786,45.5361 (WGS84)._
</td> </tr> </table>
### Standards and metadata
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Standards**
</th>
<th>
**Metadata**
</th> </tr>
<tr>
<td>
Sentinel-2
_https://sentinel.esa.int/web/sentinel/user-guides/sentinel-2-msi/data-formats_
</td>
<td>
13 JPEG2000 images, one for each band.
</td>
<td>
XML file
</td> </tr>
<tr>
<td>
SPOT6/7
_https://www.intelligence-airbusds.com/en/8723-pleiades-and-spot-6-7-format-delivery_
</td>
<td>
2 JPEG2000 images, one for the B,G,R and NIR bands at 6 m spatial resolution
and the other for the panchromatic band at 1.5 m spatial resolution.
</td>
<td>
XML file
</td> </tr>
<tr>
<td>
Pleiades
_https://www.intelligence-airbusds.com/en/8723-pleiades-and-spot-6-7-format-delivery_
</td>
<td>
2 JPEG2000 images, one for the B,G,R and NIR bands at 2 m spatial resolution
and the other for the panchromatic band at 0.5 m spatial resolution.
</td>
<td>
XML file
</td> </tr>
<tr>
<td>
Sentinel-1
_https://earth.esa.int/web/sentinel/technical-guides/sentinel-1-sar/products-algorithms/level-1-product-formatting_
</td>
<td>
Image data are stored in a Geotiff format.
</td>
<td>
XML file, geotiff SAR image &
GTiff image
</td> </tr>
<tr>
<td>
TerraSAR-X
_https://www.intelligence-airbusds.com/files/pmedia/public/r459_9_20171004_tsxxairbusds-ma-0009_tsxproductguide_i2.01.pdf_
</td>
<td>
DLF COSAR binary format.
</td>
<td>
XML file
</td> </tr> </table>
### Data Sharing
Sentinel-1 and Sentinel-2 data are free of charge and can be downloaded from
the Copernicus Open Access Hub upon subscription. However, SPOT6/7 and
Pleiades images are commercial data and can only be accessed by project
partners.
### Archiving and preservation
All the data described above will be kept on the data server of the platform
for the whole project lifetime for demonstration purposes. However, since some
datasets are commercial ones, they will have access restrictions.
## SUC2_Vineyard
This sub-use case aims at assessing the damage level in vineyards caused by
natural hazards such as frost and hail. For this purpose, the following
datasets were identified for this sub-use case:
* Since the assessment of vineyards damage needs to be done as soon as the event happens, Sentinel 2 images will be used for this use case.
* In a further step, and once good results are obtained using Sentinel-2 images, very high-resolution images, such as SPOT 6/7 and Pleiades, will also be considered. (Dates of the HR images to be defined)
### Dataset reference and name
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Summary**
</th> </tr>
<tr>
<td>
Sentinel-2
</td>
<td>
Sentinel-2 satellite images are optical datasets produced as part of the EU
Copernicus program.
These data are collected from Sentinel-2 mission that comprises a
constellation of two polar-orbiting satellites.
Images coming from Sentinel-2 mission are characterized with a rich spectral
resolution (13 bands), a high spatial resolution (10m panchromatic band) and a
high acquisition frequency (5 days in cloudfree conditions). Hence, they are
very suitable for monitoring variability in land surface.
The datasets are in ESA SAFE format and contain inside one tile with all
spectral bands.
</td> </tr>
<tr>
<td>
SPOT6/7
</td>
<td>
SPOT6/7 is a commercial mission series continuing the sustainable wide-swath,
high-resolution optical Earth observation provided by SPOT 5. The SPOT6/7
mission allows for a high level of coverage, up to a 60-km image swath.
Additionally, images acquired by SPOT6/7 have a very high spatial resolution
(1.5m). SPOT6/7 has also the capacity of daily revisit due to the phase
constellation of SPOT6 and SPOT7. A SPOT6/7 satellite image consists of 4
spectral bands (B,G,R and NIR) at 6 m and a panchromatic band at 1.5 m.
</td> </tr>
<tr>
<td>
Pleiades
</td>
<td>
Pleiades is a constellation of two very high-resolution sensors that is able
to acquire any point on earth in under 24 hours. Pleiades has a narrower field
of view when compared to SPOT6/7. However, it provides images with a finer
spatial resolution (50 cm). Pleiades images also have 4 spectral bands at 2 m
and a panchromatic band at 50 cm.
</td> </tr> </table>
### Dataset description
<table>
<tr>
<th>
**ROI**
</th>
<th>
**Dataset**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
_Bordeaux vineyards_
</td>
<td>
_Sentinel-2_
</td>
<td>
* _Period: two images, the first acquired on 19 April 2017 and the second on 29 April 2017 (before and after a frost event)._
* _Surface: 1233 km2._
* _Coordinates: xMin,yMin -0.374589,44.5206 : xMax,yMax 0.0949954,44.7976 (WGS84)._
* _S2 tiles: 30TYQ._
</td> </tr> </table>
### Standards and metadata
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Standards**
</th>
<th>
**Metadata**
</th> </tr>
<tr>
<td>
Sentinel-2
_https://sentinel.esa.int/web/sentinel/user-guides/sentinel-2-msi/data-formats_
</td>
<td>
13 JPEG2000 images, one for each band.
</td>
<td>
XML file
</td> </tr> </table>
### Data Sharing
Sentinel-2 data are free of charge and can be downloaded from Copernicus Open
Access Hub upon subscription.
### Archiving and preservation
All the data described above will be kept on the data server of the platform
for the whole project lifetime for demonstration purposes. However, since some
datasets are commercial ones, they will have access restrictions.
## SUC3_Forest-Disaster
This sub-use case concerns forest disasters (windthrow), using the example of
the Regional Directorate of State Forests (RDLP) in Toruń in 2017. The study
will analyze the extent of the disaster and the area and severity of the
damage. The activities carried out by the forestry services in the affected
areas will also be analyzed, including forest management, reclamation,
planting and allowing natural regeneration of the forest.
Satellite images acquired using different sensors can be used for this sub-use
case:
* Sentinel-2 and Sentinel-1 datasets will be used to create a binary change detection layer with information about the areas affected by windthrow. For this purpose, the Sentinel satellites appear to be an adequate solution, as they provide large-area coverage and a frequent 5-day revisit time.
* VHR satellites (like SPOT 6/7, WorldView) will be used to determine the percentage of damage in the indicated area and to monitor the progress of forest repair.
### Dataset reference and name
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Summary**
</th> </tr>
<tr>
<td>
Sentinel-1
</td>
<td>
The first satellite mission in the Copernicus Programme carrying a C-SAR
instrument, which can operate in four imaging modes with different resolutions
(down to 25 m). The mission is composed of two satellites sharing the same
orbital plane.
</td> </tr>
<tr>
<td>
Sentinel-2
</td>
<td>
Sentinel-2 satellite images are optical datasets gathered within the EU
Copernicus programme.
The Sentinel-2 mission, from which the data originate, consists of a
constellation of two satellites orbiting the Earth. Sentinel-2 images are
characterized by a rich spectral resolution (13 bands), a high spatial
resolution (10 m for the visible and NIR bands) and a high acquisition
frequency (5 days in cloud-free conditions).
</td> </tr>
<tr>
<td>
SPOT 6/7
</td>
<td>
SPOT 6/7 is a series of commercial missions, succeeding SPOT 5, that provides
sustainable high-resolution Earth observation with optical sensors. Satellite
images obtained by SPOT 6/7 consist of 5 spectral bands: blue, green, red,
near infrared and panchromatic. The spatial resolution of the first four bands
is 6 m, while the GSD of the panchromatic band is 1.5 m. The SPOT6/7
constellation allows daily revisit, and its swath is up to 60 km wide.
</td> </tr>
<tr>
<td>
WorldView3
</td>
<td>
WorldView is a series of commercial Earth observation missions operated by
DigitalGlobe. Its most distinguishing characteristic is very high spatial
resolution. The latest mission, WorldView-4, provides multispectral images
(blue, green, red and near infrared) at a spatial resolution of 124 cm/pixel
and panchromatic images at 31 cm/pixel. Its revisit interval is 3 days.
</td> </tr> </table>
### Dataset description
<table>
<tr>
<th>
**ROI**
</th>
<th>
**Dataset**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
_Torun_1_
</td>
<td>
_Sentinel-1_
</td>
<td>
* _Period: images acquired just before and after 12.08.2017_
* _Surface: 700 km²._
* _Coordinates: xMin,yMin 17.2997, 53.20327 : xMax,yMax 17.73748, 53.46858 (WGS84)_
</td> </tr>
<tr>
<td>
_Sentinel-2_
</td>
<td>
* _Period: images acquired just before and after 12.08.2017_
* _Surface: 700 km²._
* _Coordinates: xMin,yMin 17.2997, 53.20327 : xMax,yMax 17.73748, 53.46858 (WGS84)_
</td> </tr>
<tr>
<td>
_Optical VHR_
_(SPOT6/7,_
_Pleiades,_
_WorldView)_
</td>
<td>
* _Period: images acquired just before and after 12.08.2017_
* _Surface: 700 km²._
* _Coordinates: xMin,yMin 17.2997, 53.20327 : xMax,yMax 17.73748, 53.46858 (WGS84)_
</td> </tr> </table>
<table>
<tr>
<th>
**ROI**
</th>
<th>
**Dataset**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
_Torun_2_
</td>
<td>
_Sentinel-1_
</td>
<td>
* _Period: images acquired just before and after 12.08.2017_
* _Surface: 860 km²._
* _Coordinates: xMin,yMin 17.409115, 53.651802 : xMax,yMax 17.810054, 53.888559 (WGS84)_
</td> </tr>
<tr>
<td>
_Sentinel-2_
</td>
<td>
* _Period: images acquired just before and after 12.08.2017_
* _Surface: 860 km²._
* _Coordinates: xMin,yMin 17.409115, 53.651802 : xMax,yMax 17.810054, 53.888559 (WGS84)_
</td> </tr>
<tr>
<td>
_Optical VHR_
_(SPOT6/7,_
_Pleiades,_
_WorldView)_
</td>
<td>
* _Period: images acquired just before and after 12.08.2017_
* _Surface: 860 km²._
* _Coordinates: xMin,yMin 17.409115, 53.651802 : xMax,yMax 17.810054, 53.888559 (WGS84)_
</td> </tr> </table>
### Standards and metadata
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Standards**
</th>
<th>
**Metadata**
</th> </tr>
<tr>
<td>
Sentinel-1
_https://earth.esa.int/web/sentinel/technical-guides/sentinel-1-sar/products-algorithms/level-1-product-formatting_
</td>
<td>
Image data are stored in GeoTIFF format.
</td>
<td>
XML file,
GeoTIFF SAR image
</td> </tr>
<tr>
<td>
Sentinel-2
_https://sentinel.esa.int/web/sentinel/user-guides/sentinel-2-msi/data-formats_
</td>
<td>
13 JPEG2000 images, one for each band.
</td>
<td>
XML file
</td> </tr>
<tr>
<td>
SPOT6/7
_https://www.intelligence-airbusds.com/en/8723-pleiades-and-spot-6-7-format-delivery_
</td>
<td>
2 JPEG2000 images, one for the B,G,R and NIR bands at 6 m spatial resolution
and the other for the panchromatic band at 1.5 m spatial resolution.
</td>
<td>
XML file
</td> </tr>
<tr>
<td>
TerraSAR-X
_https://www.intelligence-airbusds.com/files/pmedia/public/r459_9_20171004_tsxx-airbusds-ma-0009_tsx-productguide_i2.01.pdf_
</td>
<td>
DLR COSAR binary format.
</td>
<td>
XML file
</td> </tr>
<tr>
<td>
WorldView-3
_https://dg-cms-uploads-production.s3.amazonaws.com/uploads/document/file/95/DG2017_WorldView-3_DS.pdf_
</td>
<td>
7 multispectral and 7 SWIR GeoTIFF images
</td>
<td>
XML file
</td> </tr> </table>
### Data Sharing
Data collected from the Copernicus programme (Sentinel-1 and Sentinel-2) can
be downloaded from the Copernicus Open Access Hub.
Data originating from SPOT6/7 and Pleiades are distributed under a commercial
licence and can only be accessed by project partners.
### Archiving and preservation
The datasets mentioned above will be stored on the platform's data server
throughout the project life cycle.
However, due to the commercial licences attached to certain data, access to
some datasets will be restricted.
## SUC4_Forest-Health
This sub-use case concerns tree stand analyses during the bark beetle outbreak
in Białowieża. The analyses will include both large-scale and small-scale
studies of forest condition.
The following datasets were identified for this sub-use case:
* Satellite images with a large number of bands in the VNIR and SWIR range will be useful (e.g. Sentinel-2).
* In a further step, high-resolution images such as SPOT 6/7 and Pleiades will also be considered to validate the data.
* The use of radar data to determine the health of the forest will also be tested.
### Dataset reference and name
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Summary**
</th> </tr>
<tr>
<td>
Sentinel-1
</td>
<td>
The first satellite mission in the Copernicus Programme carrying a C-SAR
instrument, which can operate in four imaging modes with different resolutions
(down to 25 m). The mission is composed of two satellites sharing the same
orbital plane.
</td> </tr>
<tr>
<td>
Sentinel-2
</td>
<td>
Sentinel-2 satellite images are optical datasets gathered within the EU
Copernicus programme.
The Sentinel-2 mission, from which the data originate, consists of a
constellation of two satellites orbiting the Earth. Sentinel-2 images are
characterized by a rich spectral resolution (13 bands), a high spatial
resolution (10 m for the visible and NIR bands) and a high acquisition
frequency (5 days in cloud-free conditions).
</td> </tr>
<tr>
<td>
SPOT 6/7
</td>
<td>
SPOT 6/7 is a series of commercial missions, succeeding SPOT 5, that provides
sustainable high-resolution Earth observation with optical sensors. Satellite
images obtained by SPOT 6/7 consist of 5 spectral bands: blue, green, red,
near infrared and panchromatic. The spatial resolution of the first four bands
is 6 m, while the GSD of the panchromatic band is 1.5 m. The SPOT6/7
constellation allows daily revisit, and its swath is up to 60 km wide.
</td> </tr>
<tr>
<td>
WorldView
</td>
<td>
WorldView is a series of commercial Earth observation missions operated by
DigitalGlobe. Its most distinguishing characteristic is very high spatial
resolution. The latest mission, WorldView-4, provides multispectral images
(blue, green, red and near infrared) at a spatial resolution of 124 cm/pixel
and panchromatic images at 31 cm/pixel. Its revisit interval is 3 days.
</td> </tr> </table>
### Dataset description
<table>
<tr>
<th>
**ROI**
</th>
<th>
**Dataset**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
_Białowieża_
</td>
<td>
_Sentinel-1_
</td>
<td>
* _Period: images acquired from 2015 to 2018 in summer months_
* _Surface: 385 km²._
* _Coordinates: xMin,yMin 23.68852, 52.674583 : xMax,yMax 23.949071, 52.870889 (WGS84)_
</td> </tr>
<tr>
<td>
</td>
<td>
_Sentinel-2_
</td>
<td>
* _Period: images acquired from 2015 to 2018 in summer months_
* _Surface: 385 km²._
* _Coordinates: xMin,yMin 23.68852, 52.674583 : xMax,yMax 23.949071, 52.870889 (WGS84)_
</td> </tr>
<tr>
<td>
</td>
<td>
_Optical VHR (SPOT6/7, Pleiades, WorldView)_
</td>
<td>
* _Period: images acquired from 2015 to 2018 in summer months_
* _Surface: 385 km²._
* _Coordinates: xMin,yMin 23.68852, 52.674583 : xMax,yMax 23.949071, 52.870889 (WGS84)_
</td> </tr> </table>
### Standards and metadata
<table>
<tr>
<th>
**Dataset**
</th>
<th>
**Standards**
</th>
<th>
**Metadata**
</th> </tr>
<tr>
<td>
Sentinel-1
_https://earth.esa.int/web/sentinel/technical-guides/sentinel-1-sar/products-algorithms/level-1-product-formatting_
</td>
<td>
Image data are stored in GeoTIFF format.
</td>
<td>
XML file,
GeoTIFF SAR image
</td> </tr>
<tr>
<td>
Sentinel-2
_https://sentinel.esa.int/web/sentinel/user-guides/sentinel-2-msi/data-formats_
</td>
<td>
13 JPEG2000 images, one for each band.
</td>
<td>
XML file
</td> </tr>
<tr>
<td>
SPOT6/7
_https://www.intelligence-airbusds.com/en/8723-pleiades-and-spot-6-7-format-delivery_
</td>
<td>
2 JPEG2000 images, one for the B,G,R and NIR bands at 6 m spatial resolution
and the other for the panchromatic band at 1.5 m spatial resolution.
</td>
<td>
XML file
</td> </tr>
<tr>
<td>
TerraSAR-X
_https://www.intelligence-airbusds.com/files/pmedia/public/r459_9_20171004_tsxx-airbusds-ma-0009_tsx-productguide_i2.01.pdf_
</td>
<td>
DLR COSAR binary format.
</td>
<td>
XML file
</td> </tr>
<tr>
<td>
WorldView-3
_https://dg-cms-uploads-production.s3.amazonaws.com/uploads/document/file/95/DG2017_WorldView-3_DS.pdf_
</td>
<td>
7 multispectral and 7 SWIR GeoTIFF images
</td>
<td>
XML file
</td> </tr> </table>
### Data Sharing
Data collected from the Copernicus programme (Sentinel-1 and Sentinel-2) can
be downloaded from the Copernicus Open Access Hub.
Data originating from SPOT6/7 and Pleiades are distributed under a commercial
licence and can only be accessed by project partners.
### Archiving and preservation
The datasets mentioned above will be stored on the platform's data server
throughout the project life cycle.
However, due to the commercial licences attached to certain data, access to
some datasets will be restricted.
# Additional datasets
## Request of additional datasets 2019
The CANDELA project has requested additional datasets for 2019. Table 1
gathers the information requested by the project in deliverable D5.7 [5]. In
addition, this section provides a brief justification of the use of these
additional datasets by the project use cases. Detailed information about the
requirements of these use cases is provided in deliverables D1.1 and D1.2
[3][4].
**Table 1: Datasets requested for CANDELA [5]**
<table>
<tr>
<th>
**Core dataset**
</th>
<th>
**Code**
</th>
<th>
**Total area (km²)**
</th> </tr>
<tr>
<td>
Archive_standard_Optical_VHR1
</td>
<td>
ADD_011a
</td>
<td>
12.600
</td> </tr>
<tr>
<td>
Archive_standard_Optical_VHR2
</td>
<td>
ADD_011b
</td>
<td>
12.600
</td> </tr>
<tr>
<td>
Archive_standard_SAR_VHR1
</td>
<td>
ADD_015a
</td>
<td>
12.600
</td> </tr>
<tr>
<td>
Archive_standard_SAR_VHR2
</td>
<td>
ADD_015b
</td>
<td>
12.600
</td> </tr>
<tr>
<td>
New acquisition_standard_Optical_VHR1
</td>
<td>
ADD_012a
</td>
<td>
8.325
</td> </tr>
<tr>
<td>
New acquisition_standard_Optical_VHR2
</td>
<td>
ADD_012b
</td>
<td>
8.325
</td> </tr>
<tr>
<td>
New acquisition_standard_SAR_VHR1
</td>
<td>
ADD_016a
</td>
<td>
8.325
</td> </tr>
<tr>
<td>
New acquisition_standard_SAR_VHR2
</td>
<td>
ADD_016b
</td>
<td>
8.325
</td> </tr> </table>
### Agriculture and Macro-economics
This use case aims at demonstrating the capacity of remote sensing to extract
adequate information that can help decision makers craft policies tackling the
challenge of urban expansion, which causes shrinking of agricultural lands.
Additionally, remote sensing can also be used for economic purposes, as in the
second sub-use case dedicated to estimating damage in vineyards due to natural
hazards. Very high-resolution (VHR) datasets requested through the Data Ware
House (DWH) are mandatory for this use case for several purposes. For
instance, potential users identified the ability to track the evolution of
specific land covers, such as buildings and roads, as one of the requirements.
The latter requirement is non-trivial and might be impossible to achieve using
the free-of-charge Sentinel-2 and Sentinel-1 images. Alternative very
high-resolution datasets were identified, such as SPOT6/7, Pleiades and
TerraSAR-X. Moreover, the analytics techniques developed in this project have
to be generic enough to process remote sensing data acquired with different
sensors. The requested VHR datasets will also be used to test the robustness
and generality of the developed techniques.
### Forest Health Monitoring
In the Forest Health Monitoring use case requirements, potential users
identified needs concerning mostly forest management and change detection.
Depending on the scale of the study, the required information has different
accuracy. For areas covering whole Regional Directorates of the State Forests,
accuracy at the level of an individual forestry parcel is sufficient. For this
purpose, optical data from the Copernicus Programme with a resolution of 10 m
will be useful. This dataset also allows detecting spots of damage caused by
mechanical or biotic factors. At a larger scale, allowing the identification
of diseases or damage on individual trees, data with higher resolution will be
useful. This will provide accurate information for people dealing with forest
nurturing and management.
In addition, very high-resolution (VHR) data can be used to verify the results
obtained with lower-resolution data. The additional datasets were chosen to
have an adequate number of infrared and SWIR bands, which allow better
determination of the condition and type of vegetation.
Moreover, in order to process remote sensing data gathered from diverse
sources and sensors, the developed algorithms and techniques ought to be
sufficiently generic. The VHR data will also serve to test the robustness and
generality of the developed solutions.
# Conclusions
This document described the broad lines of the data management plan of the
CANDELA project. The description included how to make the project data
findable, accessible, interoperable and reusable. As the CANDELA platform is
not complete yet, more details will emerge in the next DMP deliverable, which
will better illustrate the mechanisms of data standards, storage, sharing,
security and ingestion.
Additionally, the document dedicated a specific section to the datasets that
will be used in the project (for the Macro-economics & agriculture and
Forestry use cases), their description, standards, sharing and storage. The
next DMP deliverable will contain more details about these datasets. For
Sentinel-1 and Sentinel-2 images, more information will be added on the
acquisition dates. Moreover, the next DMP deliverable will be enriched with
the specifications of the commercial data requested through the Data Ware
House. Furthermore, the DMP related to the products and datasets produced
using the analytic tools will also become clearer through the project cycle
and will be further discussed in the next DMP deliverable.
---
**1392_AIDA_776262.md** (Horizon 2020)
AIDA does not currently use DOIs for produced data. However, the introduction
of persistent and unique identifiers using a service like DataCite
(https://datacite.org/dois.html) will be evaluated during the project for
simulations and enhanced data.
## Making data openly accessible
Different types of storage are available at Cineca: disk, tape and structured
databases. Depending on the typology, size and use (internal or external) of
the data, different solutions can be adopted. For example, enhanced datasets
can be hosted in a MySQL instance. Moreover, a cloud environment based on
OpenStack is also available to create virtual machines for setting up custom
database installations (MySQL, Postgres, Neo4j, etc.).
In general, Cineca storage is "user oriented": each user has their own space
in the $HOME filesystem (with back-up) and some space in the $CINECA_SCRATCH
filesystem (without back-up and for a short time) where data for all the
projects they are involved in can be stored. Moreover, on demand, a $TAPE
filesystem is available for saving data on magnetic media. In addition, two
"project oriented" filesystems are available: WORK (for data analysis) and
DRES (for data storage).
For the AIDA project, a DRES directory has been created at the request of the
project. It is a storage-only resource, based on GSS technology. It is
characterized by:
* an Owner (a user who owns that resource and is allowed to manage it),
* some possible Collaborators (users who can access the resource but not manage it)
* a validity time, an extension and a storage type
* some possible computational Projects (all collaborators of the project can access the resource)
Currently, three main types of DRES are available for the AIDAdb initial setup
(Table 6).
_Table 6: Storage resources for AIDAdb._
<table>
<tr>
<th>
_**Name** _
</th>
<th>
_**Description** _
</th>
<th>
_**Available space in TB** _
</th> </tr>
<tr>
<td>
FS
</td>
<td>
a storage area consisting in a normal unix FileSystem object
</td>
<td>
10.0
</td> </tr>
<tr>
<td>
ARCHIVE
</td>
<td>
a storage area for long-time archiving that is actually maintained mainly on
magnetic tape via LTFS technology
</td>
<td>
200.0
</td> </tr>
<tr>
<td>
REPO
</td>
<td>
a more sophisticated data repository for long-time archiving, where data are
described by metadata and different security levels are available. The Data
Repository (REPO) is based on iRODS technology
[1]
</td>
<td>
0.5
</td> </tr> </table>
Results of the numerical simulations will be made available on disk or tape
storage. In particular, a description of the numerical code, the initial
conditions, the boundary conditions, as well as the electromagnetic field data
obtained directly from the selected simulations, will be stored. These data
will be immediately accessible to all members of the Consortium and will be
made accessible to people external to the project at the latest by the end of
the project, if possible even before. The spacecraft data summarized in the
ICD are public, come from ESA or NASA databases, and are accessible to any
user following the ESA or NASA procedures. AIDA will furthermore produce new
high-level data products from the spacecraft data used, including catalogues
of features and events detected by ML and AI algorithms. These data sets will
also be stored in the Cineca DB and made available to the public.
The Cineca Data Repository [2] is a service to store and maintain scientific
data sets, built in a way that allows a user to safely back up data and at the
same time manage them through a variety of clients, such as a web browser, a
graphical desktop and command line interfaces. The service is implemented over
iRODS [1] (integrated Rule-Oriented Data System), which relies on a plain
filesystem as the backend to store data, represented as collections of
objects, and on databases for metadata. The service's architecture has been
designed to scale to millions of files and petabytes of data, joining
robustness and versatility, and to offer the scientific communities a complete
set of features to manage the data life-cycle:
* Upload/Download: the system supports high performance transfer protocols like GridFTP, or iRODS multi-threads transfer mechanism, and large interoperable ones like HTTP.
* Metadata management: each object can be associated with specific metadata represented as triplets (name, value, unit), or simply tagged and commented. This operation can be performed at any time, not just before the first upload (a minimal sketch follows this list).
* Preservation: the long-term accessibility is granted by means of a seamless archiving process, which is able to move the collections of data from the on-line storage space to a tape based off-line space and back, according to general or per-project policies.
* Stage-in/stage-out: the service is enabled to move data sets, requested as input for computations, towards the HPC machines' local storage space, commonly named “scratch”, and backwards as soon as the results are available.
* Sharing: the capability to share single data objects or whole collections is implemented via a unix-like ownership model, which allows us to make them accessible to single users or groups. Moreover a ticket based approach is used to provide temporary access tokens with limited rights.
* Searching: the data are indexed and the searches can be based on the objects location or on the associated metadata.
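To make the metadata model above concrete, the following sketch uses the open-source python-irodsclient package to upload a file and attach (name, value, unit) triplets. The host, zone, paths and attribute names are hypothetical and would be replaced by the actual Cineca configuration:

```python
from irods.session import iRODSSession

# Hypothetical connection parameters for the Cineca iRODS instance.
with iRODSSession(host='irods.example.cineca.it', port=1247,
                  user='aida_user', password='***', zone='CINECA') as session:
    # Upload one simulation output into the AIDA collection.
    session.data_objects.put('fields_run42.h5',
                             '/CINECA/home/aida/fields_run42.h5')
    obj = session.data_objects.get('/CINECA/home/aida/fields_run42.h5')

    # Attach (name, value, unit) metadata triplets, as described above;
    # this can be done at any time, not just at first upload.
    obj.metadata.add('environment', 'magnetosphere')
    obj.metadata.add('grid_spacing', '0.1', 'ion_inertial_length')
```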
During the project we will also evaluate the use of the EUDAT services B2SAFE
and B2FIND (Cineca is a EUDAT partner). The EUDAT Collaborative Data
Infrastructure is a service-oriented, community-driven, sustainable and
integrated initiative developed thanks to the FP7 and H2020 European
programmes. The aim of EUDAT is to create a single European e-infrastructure
of interoperable data services. It is based on a network of nodes (computing
or data centers across 15 European nations) that provide a range of services
for upload and retrieval, identification and description, management,
replication and data integrity, plus supplementary services needed to run the
infrastructure. EUDAT adopts a data model and a set of technical standards and
policies shared with scientific communities [3].
B2SAFE [4] allows one to replicate datasets across different data centers in a
safe and efficient way while maintaining all the information required to
easily find and query information about the replica locations. To this end,
the locations, along with other important information, are stored in PID
(Persistent Identifier) records. B2SAFE is based on iRODS, and an instance is
available at Cineca.
B2FIND [5] is a metadata service providing a discovery portal, which allows
users to find data collections within an international and inter-disciplinary
scope. It is based on a metadata catalogue of research data collections stored
in EUDAT data centers and other repositories. Harmonization of the metadata
descriptions collected from heterogeneous sources enables not only the
presentation in a consistent form but also the faceted search across
scientific domain boundaries.
For the storage of software products, AIDA is using GitLab
(https://gitlab.com/aidaspace/aidapy). GitLab is a web-based hosting service
for version control, mostly used for computer code. It offers the
functionality of distributed version control and source code management,
providing several collaboration features like task management, feature
requests and bug tracking for every project.
Projects on GitLab can be accessed using the standard Git command-line and a
web interface. GitLab also allows registered and unregistered users to browse
public repositories.
After registration on Cineca UserDB, users can access data and metadata,
depending on their role in the project (participant, external user...) using:
* Secure Shell (SSH)
* Desktop Based Tool
* Grid Services using GSI-SSH
* Cineca Data repository tool based on iRODS
Access through AidaPy for external users will be allowed without registration
on the Cineca UserDB, but with app registration. Anonymous access is also
available through iRODS.
Data will remain available at Cineca for 2 years after the end of the project,
and solutions for preservation after this period will be examined by the AIDA
project.
## Making data interoperable
To facilitate interoperability, we will enumerate the numerical simulations
using capital letters referring to the physical environment (e.g. solar wind,
magnetosphere, magnetotail), followed by the main physical process of interest
(e.g. magnetic reconnection, Kelvin-Helmholtz instability). We will also use
the letter L or E to distinguish the numerical approach, Lagrangian or
Eulerian, respectively. The same terminology will be used for space data. A
standard scientific vocabulary will be used for all data types in the data
set. A minimal sketch of this enumeration scheme follows.
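The sketch below shows how such identifiers could be composed; the two-letter codes are illustrative assumptions, since the document does not fix the final vocabulary:

```python
# Illustrative letter codes; the project's final vocabulary may differ.
ENVIRONMENTS = {'SW': 'solar wind', 'MS': 'magnetosphere', 'MT': 'magnetotail'}
PROCESSES = {'MR': 'magnetic reconnection', 'KH': 'Kelvin-Helmholtz instability'}
APPROACHES = {'L': 'Lagrangian', 'E': 'Eulerian'}

def simulation_id(env: str, process: str, approach: str, run: int) -> str:
    """Compose an identifier such as 'MS-MR-L-003'."""
    if env not in ENVIRONMENTS or process not in PROCESSES \
            or approach not in APPROACHES:
        raise ValueError('unknown code')
    return f'{env}-{process}-{approach}-{run:03d}'

print(simulation_id('MS', 'MR', 'L', 3))  # -> MS-MR-L-003
```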
Data quality assurance and quality control will be maintained throughout the
project, utilizing strategies for [6]:
* Preventing errors from entering a dataset. Typical types of errors include for example (a) incorrect or inaccurate data entered in the repository, or (b) data or metadata not recorded due to human error or anomalies in the field.
* Ensuring quality of data for entered data in order to identify potential data contamination. Typical examples include (a) checking for null values in the data (like NaN etc.) so as to avoid any missing, impossible or anomalous values within the data and (b) making sure that data line up in proper columns.
* Monitoring and maintaining the quality of data during the project in order to prevent data contamination.
Upon uploading data to the AIDApy repository, the user will be required to run
a local script/procedure that collects some non-private information and
initially checks whether the data to be uploaded are relevant and valid (a
minimal sketch of such a check follows the list below). The user will need to
include information like:
* Digital identity, in order to know who is entering data to the AIDA repository
* Disclaimer that the data do not contain any harmful item (e.g. virus free)
* Data information like the type, size, number of scenes etc.
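A minimal sketch of such a local pre-upload check is given below; the collected fields and the CSV assumption are illustrative, since the document does not fix the script's exact interface:

```python
import getpass
import os

import numpy as np

def pre_upload_check(path: str) -> dict:
    """Collect non-private upload information and run basic validity checks
    (sketch; the field names and the CSV layout are assumptions)."""
    info = {
        'digital_identity': getpass.getuser(),      # who is entering data
        'file_type': os.path.splitext(path)[1] or 'unknown',
        'size_bytes': os.path.getsize(path),
        'virus_free_disclaimer': True,              # confirmed by the user
    }
    # Reject obviously invalid numeric content (missing / NaN values).
    data = np.genfromtxt(path, delimiter=',')
    if np.isnan(data).any():
        raise ValueError(f'{path}: contains missing or NaN values')
    return info
```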
Regarding data quality assessment (QA), three aspects are generally considered
for remotely sensed imagery and can be included in the AIDApy tool for
performing data QA:
* Radiometric Quality, which is affected by sensor characteristics and detector responses such as striping, drop lines, noise and band missing
* Atmospheric Quality, which is dependent on the circumstances at the imaging time such as cloud cover and haze
* Geometric Quality, which depends both on sensor characteristics and on the satellite situation, such as attitude, position, velocity and perturbations.
AIDApy will study the usability and inclusion of some visual and statistical
analysis tools for performing QA on data entered in the repository [6],
summarized in Table 7 (a statistical sketch follows the table):
_Table 7: Visual and statistical tools for QA._
<table>
<tr>
<th>
_**Quality Defect** _
</th>
<th>
_**Visual diagnosis** _
</th>
<th>
_**Statistical Diagnosis** _
</th> </tr>
<tr>
<td>
Striping
</td>
<td>
Different overall brightness of adjacent lines
</td>
<td>
Significantly different variance and mean of adjacent lines
</td> </tr>
<tr>
<td>
Drop Line
</td>
<td>
Null scan line
</td>
<td>
Zero variance of a line
</td> </tr>
<tr>
<td>
Noise
</td>
<td>
Dark and bright points at the background
</td>
<td>
Radiometric anomalies
</td> </tr>
<tr>
<td>
Band Missing
</td>
<td>
Lack of data in a band
</td>
<td>
Zero variance of a band
</td> </tr> </table>
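The statistical diagnoses in Table 7 translate directly into simple per-band checks. The NumPy sketch below is illustrative only; the zero-variance and striping thresholds are assumptions, not values fixed by the project:

```python
import numpy as np

def qa_diagnostics(band: np.ndarray, eps: float = 1e-12, z: float = 6.0) -> dict:
    """Screen one image band for the defects of Table 7 (illustrative thresholds)."""
    line_var = band.var(axis=1)
    line_mean = band.mean(axis=1)
    mean_jump = np.abs(np.diff(line_mean))
    return {
        # Drop line: a scan line with (near-)zero variance.
        'drop_lines': np.where(line_var < eps)[0].tolist(),
        # Band missing: the whole band carries no signal.
        'band_missing': bool(band.var() < eps),
        # Striping: adjacent lines whose mean brightness jumps abnormally.
        'striping_suspects': np.where(
            mean_jump > z * (mean_jump.std() + eps))[0].tolist(),
    }
```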
## Increase data re-use
During the project, the AIDA consortium will choose an open-source licence for
data access through the AIDApy software.
For numerical simulation data, access will be provided by Cineca on the basis
of a single user request. The high-level data produced using ML and AI
algorithms will also be made available to the public on the basis of a single
user request.
Data generated by AIDA will also be reused in other national or international
scientific projects or existing databases. An example might be ingesting into
ESA science archives a number of AIDA high-level products obtained from both
ESA data and numerical simulations. The focus would be on ESA Heliophysics
Science Archives of completed or long-duration missions such as the Ulysses
Final Archive [7], the Cluster and Double Star Science Archive [8] and the
SOHO Science Archive [9]. During the AIDA project, discussions will be carried
out with ESA, and an agreement will be pursued between the project's PI and
ESA to manage the interfacing between AIDA's and ESA's databases. This will
happen through a relevant ICD (Interface Control Document) consisting of the
rules for pushing/pulling data, together with lists of metadata to be ingested
in ESA databases. Following the ICD specifications, Cineca will create an
archive including the project's results in order to make data ingestion into
the ESA archives as easy as possible. Cineca will also create the procedures
to load the project's data into the ESA archives.
# Allocation of resources
There are no immediate costs anticipated to make the produced data FAIR. Data
will be processed using the tools developed in the AIDA framework and stored
in Cineca systems. Parts of the AIDAdb relevant to specific European missions
will also be migrated to the ESA database. The databases created by AIDA
(AIDAdb) will be public and interfaced with community-based databases such as
those maintained by ESA.
For documentation, the EUDAT service B2DROP is used. B2DROP [10], based on
ownCloud, is a user-friendly and trustworthy storage environment, which allows
users to synchronize their active data across different desktops and to easily
share this data with peers. The service is directed towards all European
scientists whose institute does not host such storage.
# Data security
For the duration of the project, data will be stored on the Cineca storage
protected against unauthorized access by means of standard Cineca
authentication. Appropriate access levels will be granted by definition of
roles in the project.
# Ethical aspects
There are no ethical issues in the generation and analysis of simulations and
solar data. There are no human subjects or samples involved. Personal data of
project participants will be treated following GDPR through the Cineca Userdb.
# Long term plans
AIDA will continually revise the present plans to take advantage of the
evolving scenario on scientific databases, in particular databases for space
sciences. As the project matures, we plan to use every opportunity to seek
continuity for the activities of both AidaPy and AIDAdb. We envision AidaPy
becoming a community open-source software package for Python that will
continue to expand and be refined through contributions from the community of
users. AIDAdb, instead, requires hardware resources and therefore financial
support. We will seek opportunities in future EU and ESA plans for the
continuation of a database of simulation results that will retain long term
the data of published results and of key simulations, and will expand with new
input from the community.
---
**1393_GAUSS_776293.md** (Horizon 2020)
# Introduction
This document describes the data management life cycle for the data to be
collected, processed and/or generated by GAUSS, which participates as a
beneficiary of the Open Research Data Pilot (ORD pilot); it has been developed
following the H2020 guidelines and template [1].
It includes information on: the handling of research data during and after the
end of the project; what data will be collected, processed and/or generated;
which methodology will be applied; whether data will be shared/made open
access; and how data will be curated and preserved (including after the end of
the project).
The Data Management Plan (DMP) is intended to be a living document in which
information can be made available on a finer level of granularity through
updates as the implementation of the project progresses and when significant
changes occur; therefore this is a first version and it may be updated should
the need arise [2].
The ORD pilot aims to improve and maximise access to and re-use of research
data generated by Horizon 2020 projects and takes into account the need to
balance openness and protection of scientific information, commercialisation
and Intellectual Property Rights (IPR), privacy concerns, security as well as
data management and preservation questions. According to European Commission
figures [3] approximately an average of 67% of H2020 proposals opted in this
initiative in the period 2014-2016.
Research data will be findable, accessible, interoperable and reusable (FAIR),
to ensure it is soundly managed. Good research data management is not a goal
in itself, but rather the key conduit leading to knowledge discovery and
innovation, and to subsequent data and knowledge integration and reuse.
Several studies indicate that openness increases citations [4], which will be
a key asset for dissemination of GAUSS’ outcomes and “openness also improves
reproducibility of your research results – and it might introduce new and
perhaps unexpected audiences to your work” [5].
Nevertheless, sharing some information and data could be harmful not only for
the consortium and its members but also for drone operations. This is why,
although general guidelines are explained in this document, a consortium
decision 1 will be made on an individual basis before sharing any specific
information outside the consortium and/or making information public, besides
the deliverables that were stated as public in the Project Management Plan
[6].
Some H2020 projects contemplate data management as one of the goals of the
project, for example to create or populate a database ([7], [8]) or to perform
statistical studies and correlations ([9][10][11]). In this project, on the
other hand, data is used as a means to develop the GAUSS system; therefore the
amount of data and information to be published and distributed at the end is
reduced, and it will probably take the form of written reports and articles.
Nevertheless, other information is likely to be published as well, as
explained in this document.
## Structure of the document
The current Section 1 introduces the document and briefly describes the
information included.
Section 2 summarises the main type of data that GAUSS will work with.
Section 3 describes the procedures that will be followed to ensure that the
main research GAUSS data is as FAIR 2 as possible.
Section 4 describes how the costs related to data management will be assumed
within the project.
Section 5 deals with data security issues.
Section 6 briefly describes potential ethical aspects
# Data summary
GAUSS will both collect and generate information; the former is essential to
have a proper view and knowledge of current technologies: UTM operation (WP2),
Positioning (WP3), Integrity and Security (WP4) and UTM technologies (WP5).
Furthermore, outputs from GAUSS will include technical reports (D3.2, D4.2,
D5.2, etc.) and also operational definition and requirements (D2.1, D2.4,
etc.).
This data generation will allow defining RPAS systems which will improve
precision, safety and security and will optimize their usability by improving
their coordination and enhancing their compatibility; these concepts are fully
aligned with the project objectives.
The most common format will be reports in the form of .pdf files and
datasheets, since they are an easy way to present post-processed data. GAUSS
includes both unitary and laboratory tests together with field trials; these
activities will generate huge amounts of raw data in several formats: pictures
and videos for visual proofs, .txt/.csv files for GNSS locations, .kml for
wide use of geoposition information, etc.
Most data obtained during the unitary tests will be used to update and modify
the different systems that will be developed in order to fulfil GAUSS
objectives and furthermore successfully perform the final field trials.
Data will come from different sources depending on whether it is collected or
generated:
❑ Collected data:
* GAUSS partner knowledge
* Proactive research
* Feedback from AB and stakeholders.
* Open source code

❑ Generated data:
* GAUSS partner internal development
* Tests
* Field trials
Due to the complexity of the project and its duration, the collected and
especially the generated data will be of considerable size. However, most of
this size will be due to raw experimental data (experiments and trials
generate huge amounts of data that need to be processed to elaborate results),
and an effort will be made to summarize such data to make it
human-understandable and to eliminate irrelevant or repetitive data that do
not add value to the project, in order to keep the overall data size to a
minimum.
## Collected data
In order to develop the GAUSS systems, each partner will gather data; this
kind of information will be managed by each partner individually and is not
expected to be shared/published outside the consortium on behalf of GAUSS.
Therefore, this report does not focus on this type of data.
Some examples and sources of this type of data are papers,
journals/magazines, webpage information, other R&D projects 3 , etc., and
the most common formats are .pdf and other office formats.
## Working data
This type of data will be generated by different partners within the
consortium in order to design/develop the GAUSS systems they are responsible
for. Similar to the previous type of data, working data is not expected to be
shared/published outside the consortium on behalf of GAUSS. Therefore, this
report does not focus on this type of data.
Some examples of this type of data are assessment reports, datasheets,
minutes, etc., and the most common formats are .pdf and other office formats.
## Generated data
Finally, GAUSS will generate some data as outcome and results of the project,
some of these will become relevant information which will be of public domain
and therefore published, some information will only be shared among relevant
stakeholders (advisory board and other H2020 projects) and other information
will be confidential and will not be published; this report focuses on the
former two.
Several type of files are expected for this purpose:
* Reports in the form of .pdf files. Some GAUSS reports will be public, according to the GAUSS PMP (Project Management Plan) [6]: D2.1-Design of UTM Concept of Operations; D2.2-Definition of UTM Scenarios and use cases report; D4.1-Report on EGNSS security-enabling features relevant for RPAS; D6.4-First trials results report; D6.5-Second trials results report; D6.6-Performance results analysis and conclusions.
* Presentations in the form of .pdf files; for example for GAUSS workshops or public events.
* Telemetry files with information on location and the parameters to be assessed for each flight (mainly flight plans and 4D trajectories, GNSS signal integrity, GNSS signal security and security of UTM communications, to name a few) 4 . These are expected to be .csv files in order to increase dissemination potential, since they are easy to work with and manipulate [12] 5 . The main source of these files will be the field trials, where it is expected that for each flight of each aircraft a series of three telemetry files will be generated:
* Information coming from the current positioning system.
* Information coming from the GAUSS positioning system.
* True location information (coming from RTK or DGPS).
Telemetry files will also be generated by means of a simulator (Gazebo
Simulator). A parsing sketch follows Figure 1.
_Figure 1: example of a telemetry csv file_ 6
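As an illustration of how such telemetry files could be consumed, the sketch below parses a per-flight .csv. The column names and the example file name are hypothetical, since the actual header is defined by the field-trial tooling:

```python
import csv

def load_telemetry(path: str):
    """Yield one position fix per row of a telemetry .csv (hypothetical columns)."""
    with open(path, newline='') as f:
        for row in csv.DictReader(f):
            yield {
                'timestamp': float(row['timestamp']),
                'lat': float(row['lat']),
                'lon': float(row['lon']),
                'alt': float(row['alt']),
            }

# Hypothetical file produced by the GAUSS positioning system for one flight.
for fix in load_telemetry('flight01_gauss_positioning.csv'):
    print(fix['timestamp'], fix['lat'], fix['lon'], fix['alt'])
```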
Overall, GAUSS data will be useful to the RPAS industry and the
ATM/UTM/U-space community, and low-level technical information will be useful
for industries related to RPAS, such as OEMs and R&D centres, which could
build their developments on GAUSS information; other (present and future) R&D
projects (including H2020) could use this information as "collected data" to
design and develop their systems.
# FAIR data
## Making data findable, including provisions for metadata
Files produced during the project will be tagged accordingly to ease their
discovery and access; furthermore, since GAUSS involves a high level of
experimental activity, raw data will be generated and their associated
metadata will also be managed.
In order to maximise accessibility to GAUSS results, a straightforward naming
convention and keywords will be used, together with coherent and detailed
versioning; such keywords will include "European Union (EU)", "GSA",
"Galileo", "EGNOS", "GNSS", "UTM", "UAS", "U-Space" and "satellite navigation
system"; the name of the action, acronym and grant number; the publication
date and the length of the embargo period, if applicable; and a persistent
identifier (DOI). DOI identifiers will be used to cite reports and data sets;
they are supported by most file repositories, and their use eases the
identification of content in addition to providing a persistent link to its
location on the Internet. The naming convention used for deliverables is the
following (following the internal nomenclature of GAUSS):
GAUSS-D_<Deliverable code>-<Deliverable name>-v<VV>
where <VV> is the version number of the deliverable. The naming convention
instructs to use underscores "_" instead of blank spaces " " and to avoid
special characters such as "~ ! @ # $". Where relevant, date designators will
be used in the format YYYYMMDD, which ensures that all files can be sorted in
chronological order. A minimal sketch of this convention is given below.
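A minimal sketch in Python, assuming the GAUSS-D_<code>-<name>-v<VV> reading of the convention above; it is illustrative, not the project's official tooling:

```python
import re

# Pattern for the convention as reconstructed above (assumption).
PATTERN = re.compile(r'^GAUSS-D_[A-Za-z0-9.]+-[A-Za-z0-9_]+-v\d{2}$')

def deliverable_name(code: str, name: str, version: int) -> str:
    """Build e.g. 'GAUSS-D_2.1-Design_of_UTM_Concept_of_Operations-v01'."""
    name = name.replace(' ', '_')                 # underscores, not blanks
    if re.search(r'[~!@#$]', name):               # no special characters
        raise ValueError('special characters are not allowed')
    result = f'GAUSS-D_{code}-{name}-v{version:02d}'
    assert PATTERN.match(result)
    return result

print(deliverable_name('2.1', 'Design of UTM Concept of Operations', 1))
```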
In case it is useful to include metadata information in the files, several
standards will be assessed, focusing on generic standards, since a specific
standard for GAUSS metadata files has not been found; an initial assessment
has been performed, see Table 1.
<table>
<tr>
<th>
**Metadata standard**
</th>
<th>
**Brief description**
</th> </tr>
<tr>
<td>
**Data Package**
</td>
<td>
Generic wrapper format for exchanging data.
</td> </tr>
<tr>
<td>
**ISA-Tab**
</td>
<td>
General purpose to collect and communicate complex metadata
</td> </tr>
<tr>
<td>
**PREMIS**
</td>
<td>
Defines a set of metadata that most repositories of digital objects work with
</td> </tr>
<tr>
<td>
**ISO 19115**
</td>
<td>
Schema used for describing geographic information and services
</td> </tr>
<tr>
<td>
**Dublin Core**
</td>
<td>
Basic, domain-agnostic standard and one of the most widely used as a metadata
standard. It is included in the OpenAIRE guidelines [13].
</td> </tr> </table>
_Table 1: potential standards for metadata [14]._
## Making data openly accessible
Some deliverables will be publicly available (D2.1, D2.2, D4.1, D6.4, D6.5 and
D6.6), and other project information might be made public if agreed by all
partners and the PO. In this sense, some GAUSS partners (especially research
centres) will publish scientific/technical publications 7 regarding their
field.
GAUSS results that are to be publicly available will be made accessible via
Zenodo (see Section 3.2.1 for further information), an open-source repository,
although publication in other directories will also be encouraged. EVADS will
manage the upload and submission of the files to Zenodo, although such files
are to be provided by the partner responsible for the task associated with
each file within the scheduled deadlines.
Some software implementations, especially regarding WP5, will be made openly
available by USE on GitHub.
Furthermore, in order to maximize project dissemination, some results are
likely to be published in scientific journals and on the GAUSS website; see
[15] for a potential list of scientific journals and public reports. EVADS
will manage the upload and submission of the deliverables to the website,
although they are to be provided by the partner responsible for the task
associated with each deliverable within the scheduled deadlines.
### 3.2.1 Zenodo
Zenodo [16] is an interdisciplinary open research data repository service
built and operated by CERN and OpenAIRE that enables researchers from all
disciplines to share and preserve their research outputs, regardless of size
or format.
Other repositories were assessed [17], and Zenodo was chosen for its
simplicity, its wide use (Figure 2 shows how widely Zenodo is used among H2020
projects), some partners' previous experience 8 and its relation with CERN
and OpenAIRE.
Furthermore, Zenodo also provides an API; a minimal sketch of its use is given
below.
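The sketch below uses the Zenodo REST API (documented at https://developers.zenodo.org) to create a deposition and upload a file; the token and the file name are placeholders:

```python
import requests

TOKEN = 'REPLACE_WITH_PERSONAL_ACCESS_TOKEN'  # placeholder

# Create an empty deposition on Zenodo.
r = requests.post('https://zenodo.org/api/deposit/depositions',
                  params={'access_token': TOKEN}, json={})
r.raise_for_status()
deposition = r.json()

# Upload a file into the deposition's bucket (placeholder file name).
bucket_url = deposition['links']['bucket']
with open('D2.1_report.pdf', 'rb') as fp:
    requests.put(f'{bucket_url}/D2.1_report.pdf',
                 params={'access_token': TOKEN}, data=fp).raise_for_status()
```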
A persistent identifier (DOI) is issued to every published record on Zenodo.
This is a top-level and a mandatory field in the metadata of each record which
helps to make uploads easily citable.
In order to upload data to Zenodo, the GAUSS consortium has created a project
account, and all files to be made public will be submitted through this
account to maximize dissemination and project awareness. Files may be
downloaded directly without the need for an account, which will increase
dissemination reach.
Files uploaded to Zenodo have a size limit of 50 GB (GAUSS files will be much
lighter than this) and will be retained online for the lifetime of the
repository, which is the same as that of the CERN laboratory (the host), whose
experimental programme is defined for at least the next 20 years [18].
Metadata is exported in several standard formats such as MARCXML, Dublin Core,
and DataCite Metadata Schema [16].
Restricted access can be configured on Zenodo, although this is not expected
to be used since the repository will only host public information.
Furthermore, Zenodo provides analytics of the uploaded information, which will
be used for analysing dissemination impact [19], [15].
_Figure 2: Top 20 data providers for H2020 publications [20]._
## Making data interoperable
As explained in Section 2, the data to be published will be in very common
formats, so it is expected that they will be easily used by external parties
(researchers, institutions, organisations, etc.).
Standards may be used at times for specific files, and the information
summarised in reports will also include a detailed glossary of the specific
notations used throughout the document to ensure concepts are clearly defined.
## Increase data re-use
In order to maximize the reach of the information and ease its access,
publicly available GAUSS data will be published under the
"Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)" licence 9 as soon
as possible, and at the latest upon publication.
One of the main goals of GAUSS is to help define future drone regulations, so
it is expected that most of GAUSS' outcomes will be further used.
Data to be published will be subject to quality assurance procedures (further
explained in [6] and [21]).
# Allocation of resources
Most of this FAIR initiative will not involve direct costs such as licenses,
software or hardware, since open-source third-party servers and technologies
will be used to host public data (see Section 3.2). Nevertheless, it will
require time spent managing such data (receiving, selecting, adapting,
uploading, etc.), which has been contemplated within the management activities
of the project (WP1) and the dissemination activities (WP7). Although the
final upload to the repository and/or website will be performed by EVADS, each
partner is to deliver the required files to be uploaded within the agreed
deadlines.
Some activities with a direct cost associated have been contemplated (such as
publications in scientific journals under payment, attendance to specific
events, etc.) in order to maximize GAUSS dissemination. This cost has already
been contemplated within travel and other goods/services costs associated to
each partner and it has been specified in the Grant Agreement.
# Data security
As it has been explained in Section 2, GAUSS will work with different type of
data and it may be classified into some groups:
* Internal data: this is the data that each partner works with individually within the framework of GAUSS (reports, papers, software algorithms, etc.). Each partner has measures and procedures implemented within their company (secure storage, periodic backups, malware detection, etc.) to ensure CIA triad (Confidentiality, Integrity and Availability) 10 .
* Transferred data: this is the data/information that is transferred among the consortium within the framework of GAUSS, for example preliminary versions of deliverables, specification documents, field trial telemetry, etc. As defined in the Grant Agreement, the highest level of security of information within GAUSS is "Confidential", which is not as restrictive as other levels (such as Classified).
* Published data: this is the data/information that is published in the website and/or the repository. As it was defined in [6] the former will be operative during the project duration and information on how the repository manages the information is included in Section 3.2. The repository allows for restricted access to specific files, but this feature is not considered relevant at this point since all the information published there will be public for dissemination purposes.
# Ethical aspects
No ethical or legal issues are expected regarding data sharing, since no
private data is expected to be included in the information that will be
published. Whenever such data is gathered (pilot data, serial numbers, etc.),
it will be omitted or anonymized before submission.
Furthermore, information obtained through external means (for example, AIS and
FIS data) will not be shared unless explicit agreement from the provider is
obtained, even when it does not contain private data.
---
**1394_TransSec_776355.md** (Horizon 2020)
D1.10 Data Management Plan
**Open Research Data Management Plan (DMP) describes the data management life
cycle for the data to be collected, processed and/or generated by a Horizon
2020 project.**
**Topic:**
Reference trajectories for digital road map evaluation
**Related Publication:**
Zhang, L.; Wang, J.; Wachsmuth, M; Gasparac, M.; Trauter, R.; Schwieger, V.:
Role of Digital Maps in Road Transport Security. FIG Working Week 2019, Hanoi,
Vietnam.
**Description of open access research data:**
147 km of precise GNSS trajectories are available. They have a sampling rate
of 1 Hz and were generated with a Leica Viva GS15 receiver (processing of
phase data; accuracy better than 10 cm). The positions and their standard
deviations are provided in an ASCII format. The trajectories are organized as
follows: 17.3 km of German motorway, 50.2 km of motorway entrance and exit
ramps, and 79.5 km of urban areas. The data include a time stamp in GPS time,
the 2-dimensional positions in North and East (UTM) and additionally the
ellipsoidal height, as well as the respective standard deviations. The lane on
which the reference was generated is also given. Positions where the lane
assignment is not clear (e.g. lane changing) or where GPS outliers occurred
are marked as well.
The amount of data is around 2.7 MB.
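A minimal sketch of reading such a trajectory file is given below; the column names and the whitespace-separated layout are assumptions, as the published files define the actual header:

```python
def read_trajectory(path: str):
    """Yield one epoch per line of the ASCII trajectory (assumed layout)."""
    columns = ['gps_time', 'north', 'east', 'height',
               'std_north', 'std_east', 'std_height', 'lane', 'flag']
    with open(path) as f:
        for line in f:
            if not line.strip() or line.startswith('#'):
                continue                          # skip blanks and comments
            values = dict(zip(columns, line.split()))
            yield {
                'gps_time': float(values['gps_time']),   # GPS time [s]
                'north': float(values['north']),         # UTM North [m]
                'east': float(values['east']),           # UTM East [m]
                'height': float(values['height']),       # ellipsoidal [m]
                'std_north': float(values['std_north']),
                'std_east': float(values['std_east']),
                'std_height': float(values['std_height']),
                'lane': values['lane'],
                'flag': values['flag'],   # lane-change / GPS-outlier marker
            }
```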
**Planned Date of publication:**
30th June, 2020
**Access Provider:**
Downloadable from the webpage of the Institute of Engineering Geodesy,
University of Stuttgart (_https://www.iigs.uni-stuttgart.de/_), via web
interface on request (due to confidentiality). Email: [email protected]
---
**1397_SIA_776402.md** (Horizon 2020)
# 4 Introduction
SIA is a Horizon 2020 project co-funded by GSA. This project is part of the
_Open Access to_ _Scientific Publications and Research Data Programme in
H2020_ . The goal of the program is to foster access to data generated in
H2020 projects.
Open Access refers to the practice of giving online access to scholarly
information from all disciplines, free of charge to the end-user. In this way
data becomes re-usable and the benefit of public investment in research is
improved.
As described in Figure 2, the EC provided a document with guidelines for
project participants in the pilot. The guidelines address aspects like
research data quality, sharing and security. According to the guidelines,
participating projects will need to develop a Data Management Plan.
The purpose of the DMP is to provide an overview of the main elements of the
data management policy that will be used by the consortium with regard to the
project research data. The DMP is not a fixed document but will evolve during
the lifespan of the project.
The DMP covers the complete research data life cycle of the SIA project. It
describes the types of research data that will be generated during the
project, the strategies on research data preservation and the provision on
access rights. The research data should be “FAIR”, that is findable,
accessible, interoperable and re-usable. These principles precede
implementation choices and do not necessarily suggest any specific technology,
standard or implementation solution.
**Figure 1. SIA data sharing policy** (elements: Sharepoint Site (partners), Zenodo Repository, SIA Website)
In the context of research funding, open access requirements do not imply an
obligation to publish results. The decision to publish is entirely up to the
grant beneficiaries. Open access becomes an issue only if publication is
chosen as a means of dissemination. Moreover, open access does not affect the
decision to exploit research results commercially, e.g. through patenting. The
decision on whether to publish through open access must come after the more
general decision on whether to publish directly or to first seek protection.
The policy for data sharing in SIA project is sketched in Figure 1, which
includes different access levels for consortium members and for external
users:
* Data Sharing and Access for Project Partners: all the information and significant research data related to the SIA project will be shared through a _Sharepoint Private Site_ which only project partners can access.
* Data Sharing and Access for External users (Open Access):
* _Project Website_ : will be a key in supporting the project communication to the general public and the project stakeholders during the project lifetime.
* **Zenodo** Repository: this repository will host the Open Access Research Data both during the project lifetime and after the project ending.
More information concerning the project website and the **Sharepoint** can be
found in deliverable D1.1 of the project (Project Management and Quality
Assurance Plan) [1], which is published on the website.
**Figure 2. H2020 Open Data Access from H2020 rules**
## 4.1 SharePoint as internal and private sharing tool
As detailed in the Quality Assurance Plan (Deliverable D1.1), a Microsoft
SharePoint server has been set up in order to facilitate efficient internal
communication among partners. This site will be used to manage and coordinate
the project; its access is restricted to the partners of the Consortium. The
site can be accessed via the menu item entitled "Extranet" provided on the web
page. Additionally, a valid user name and password are needed; these are
notified to users separately by the PMO.
SharePoint is a web-based collaboration and document management platform from
Microsoft. It will be used during the project to share workspaces and
documents. Important project files will be stored and maintained on
SharePoint, and a folder structure has been created.
**Figure 3. Home page of SIA private site**
## 4.2 SIA Website
The SIA project website ( _www.siaproject.eu_ ) is a reference point and
anchor for SIA online content and outreach activities. It will explain the
context, developments and ambitions of the project to our stakeholders and the
general public. In order to keep a continuous and current information flow
from the project to the public at large, the website will contain articles
about SIA topics, interviews with experts from within and outside the project,
and press releases about project highlights.
The SIA website will be a key in supporting the project communication to the
general public and the project stakeholders.
**Figure 4. SIA website**
## 4.3 Zenodo repository
The repository Zenodo [2] has been chosen as the main repository to store,
classify and provide open access to the data objects originating within the
SIA project frame.
_Zenodo_ is an open, dependable repository for all of scholarship, enabling
researchers from all disciplines to share and preserve their research outputs,
regardless of size or format. The main features of Zenodo that make it a
suitable tool for data sharing and preservation are:
* Zenodo is linked to Horizon2020 projects and all results are immediately linked to OpenAIRE and the EC portal.
* Share and link research: Zenodo provides a rich interface which enables linking research outputs to datasets and funding information. All open content is harvestable via OAI-PMH by third parties.
* Supports versioning: via a top-level DOI, all the different versions of a file can be cited together.
* Trusted, reliable, safe: Data is stored at CERN, which has considerable knowledge and experience operating large scale digital repositories. Data files and metadata are kept in multiple online and offline copies.
* Reviewing: research materials can be set to be shared with reviewers only, if needed.
# 5 Data summary
Figure 5 shows the architecture diagram of SIA, as described in deliverable
D2.2 of the project [3]. The different interfaces through which data will
circulate are depicted as follows:
* External interfaces (both input and output) that support the functionality described in D2.1. are depicted in red.
* Internal interfaces that are derived from the design of the architecture are depicted in blue.
**Figure 5. SIA system architecture**
Relevant information on the input/output data described below shall be
subject to anonymization and/or confidentiality when necessary, especially
where IPR issues are declared.
The input / output data that will circulate through the external interfaces is
briefly summarized:
* **IF1: Operations data** . This information will be provided by the end-users of the consortium (FGC and OBB). It will include:
  * Infrastructure characteristics (GIS map of the lines, composition of the infrastructure, location of assets, etc.)
  * Vehicle characteristics (constructive parameters, multibody models, etc.)
  * Service and operational characteristics (list of vehicles and train compositions, timetables, speed profiles, loading curves, etc.)
* **IF2: Maintenance procedures** . This information will be provided by the end-users of the consortium (FGC, OBB, VIAS, TELICE). It will include:
  * Catalogue of failures associated with the relevant components
  * KPIs associated with the health status of relevant components
  * Limits/thresholds that assess the health status of relevant components
  * Maintenance actions associated with relevant components
* **IF3: Auscultation raw data** . This interface contains data coming from measurement devices that imply physical contact (e.g. auscultation train, etc.). The number of parameters, their nature and the format of the data depend on the asset under measurement (i.e. catenary, pantograph, wheelset, rail). Raw data will be stored, if relevant, in accordance with the Commission rules, to be used (eventually and depending on IPR) in future related projects, and also to allow post-processed data or summarised information to be understood during analysis and post-analysis leading to diagnosis.
* **IF4: Inspection raw data** . This interface contains data coming either from measurements (that do not imply physical contact) and/or from inspection forms. The number of parameters, their nature and the format of the data depend on the asset under inspection (i.e. catenary, pantograph, wheelset, rail). Raw data will be stored in the same way as mentioned above.
* **IF5: Ambient** . This interface will contain different datasets that correspond to the physical magnitudes that will be measured by the on-board sensors (e.g. accelerations, displacements, forces, etc.).
* **IF6: EGNSS systems** . This interface computes data coming from EGNSS systems: GNSS signals, inertial signals and digital map.
* **IF7: Asset status** . The SIA system will generate data related to the health status of the relevant components. It will include:
  * Historic auscultation/inspection data
  * Current status of relevant components
  * Predicted status of relevant components
* **IF8: Early detection of component failure** . Based on the future (i.e. predicted) status of relevant components and the data from IF2, this interface will contain messages with information about early-detected failures.
* **IF9: Maintenance recommendations** . Based on the future (i.e. predicted) status of relevant components, the early-detected failures (IF8) and the data from IF2, this interface will contain messages with information about maintenance recommendations.
* **IF10: External interfaces** . This interface will include data summarizing the current status of relevant components, the early detection of component failures and the associated maintenance recommendations.
The input / output data that will circulate through the internal interfaces is
briefly summarized:
* **IF_PANT_DH** : It is used for transmitting condition relevant features and/or raw data as well as status information on sensor performance from the Pant sub-system to SIA_DH.
* **IF_ABA_DH** : It is used for transmitting condition relevant features and/or raw data as well as status information on sensor performance from the SIA_ABA sub-system to SIA_DH.
* **IF_POS_PANT** : Used for synchronisation of SIA-PANT sensors with other sensing nodes within the vehicle.
* **IF_POS_ABA** : Used for synchronisation of ABA sensors with other sensing nodes within the vehicle.
* **IF_POS_DH** : It is used for outputting positioning-related measurements or final positions to SIA_DH, depending on the approach taken as discussed above. It is also used to pass on configuration information to the GNSS receiver and/or positioning engine if applicable.
* **IF_DH_VP** : Receives in SIA_DH the data generated by the different subsystems, and sends the data introduced into the system through SIA_VP that is required by the rest of the subsystems.
* **IF_CDM_VP** : Sends input data to SIA_CDM and receives the calculated output data from SIA_CDM.
In every case, the template for datasets from partners acquiring and sharing
these data will cover “Origin of the data”, “expected size”, “format” and
“reusing of data”, as sketched below.
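A minimal sketch of such a dataset description record is given below; the field names follow the template fields named above, while the structure and the example values are illustrative assumptions, not a prescribed project format.

```python
# A minimal sketch of the dataset description template; field names are
# taken from the text above, values are hypothetical examples.
dataset_template = {
    "origin_of_the_data": "IF3 auscultation run on an FGC line",   # hypothetical
    "expected_size": "approx. 2 GB per measurement campaign",      # hypothetical
    "format": [".csv", ".hdf5"],
    "reusing_of_data": "post-analysis and diagnosis in related projects",
    "version": "1.0",  # major versions only at task milestones (see below)
}
```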
Every dataset and working document will be properly referenced with a version
control number; major versions are assigned only when a dataset or document is
submitted for a task milestone.
The types and formats of data within the project frame include the following:
* **Laboratory data** : datasets (*.txt, *.doc, *.docx, *.xls, *.xlsx, *.hdf5, SensorML, etc.), multimodal measurements (*.txt, *.doc, *.docx, *.xls, *.xlsx), numerical data (*.XX), qualitative data (*.txt, *.doc, *.docx), data statistics (*.xls, *.xlsx), images (*.jpg, *.png, *.jpeg, *.tiff), videos (*.mp4, *.mov), geographical information (*.kml, *.gpx).
* **Fusion data** : statistics (*.xls, *.xlsx), graphs (*.ogg, *.xls, *.xlsx), bibliography (*.enl), code and executables (*.rpm, *.exe, *.c, *.cpp, *.py, *.java)
* **Scientific texts** : manuscripts and reports (*.doc, *.docx, *.pdf), publications (*.doc, *.docx, *.pdf), conference proceedings (*.doc, *.docx, *.pdf), conference presentations and posters (*.ppt, *.pptx, *.pdf), books and theses (*.doc, *.docx, *.pdf).
* **Operational data** : inspection forms, maintenance sheets and procedures, operations planning, timetables, vehicle-related data, infrastructure-related data, etc. All the documents will come in the form of *.doc, *.docx, *.pdf, *.jpg, *.png, *.jpeg, *.ppt, *.pptx, *.xls, *.csv.
* **Dissemination material** : leaflets and fact-sheets (*.pdf), images (*.jpg, *.png, *.jpeg, *.tiff), animated images (*.gif), videos (*.mp4), social network publications and website (*.html), presentations and templates (*.ppt, *.pptx, *.pdf).
* **Management documents** : deliverables (*.doc, *.docx, *.pdf), patents (*.doc, *.docx,
*.pdf).
# 6 Fair data
## 6.1 Making data findable, including provisions for metadata
### 6.1.1 Metadata provision
The Zenodo repository offers the possibility to assign several metadata fields
to all uploads, in order to make the content findable. The tags Zenodo offers are:
* Publication type (journal article, presentation, book, thesis, etc.)
* Title, authors, affiliation
* Description of the content
* Communities that the data belong to
* Grants which have funded the research
* Identifiers (DOI, ISSN, PubMed ID, URLs, etc.)
* Contributors
* References
### 6.1.2 Identifiability of data
Zenodo assigns all publicly available uploads a digital object identifier
(DOI) to make the upload easily and uniquely citable. If the upload already
has a DOI assigned, it can be detailed in the metadata provision.
All data generated under the SIA project will acknowledge the grant in the
following way:
**“ _This project has received funding from the European Union’s Horizon 2020
research and innovation programme and from the European Global Navigation
Satellite Systems Agency under grant agreement No 776402_ ”. **
Moreover, they will automatically be associated with the project via the
_OpenAIRE_ portal.
### 6.1.3 Keywords
All uploads will include a group of relevant keywords in order to facilitate
the identification of the results.
### 6.1.4 Clear versioning
The Zenodo repository provides a feature to handle versioning: DOI versioning
allows users to edit/update a record’s files after they have been published,
to cite a specific version of a record, and to cite all versions of a record.
When an upload is published on Zenodo for the first time, Zenodo registers two
DOIs:
* a DOI representing the **specific version** of the record.
* a DOI representing **all versions** of the record.
Afterwards, a DOI is registered for every new version of the same upload.
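For illustration, the sketch below shows how a deposit carrying the metadata of Section 6.1.1 could be created through Zenodo's public REST API; the access token, file name and metadata values are placeholders, and the snippet is an indicative example rather than a prescribed project workflow.

```python
import requests

ZENODO = "https://zenodo.org/api/deposit/depositions"
params = {"access_token": "YOUR_ZENODO_TOKEN"}  # placeholder token

# 1. Create an empty deposition.
dep = requests.post(ZENODO, params=params, json={}).json()

# 2. Upload the data file into the deposition's file bucket.
with open("results_dataset.csv", "rb") as fp:  # hypothetical file name
    requests.put(f"{dep['links']['bucket']}/results_dataset.csv",
                 data=fp, params=params)

# 3. Attach the metadata fields that make the record findable (Section 6.1.1).
metadata = {"metadata": {
    "title": "SIA example dataset",
    "upload_type": "dataset",
    "description": "Example dataset generated within the SIA project.",
    "creators": [{"name": "Surname, Name", "affiliation": "Partner"}],
    "keywords": ["railway", "condition monitoring", "SIA"],
}}
requests.put(dep["links"]["self"], params=params, json=metadata)

# 4. Publish: Zenodo now mints a version DOI and an all-versions (concept) DOI.
requests.post(dep["links"]["publish"], params=params)
```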
## 6.2 Making data openly accessible
### 6.2.1 Which data will be made openly available?
_Scientific Publications_
Article 29.2 of the model grant agreement sets out detailed legal requirements
on open access to scientific publications: under Horizon 2020, each
beneficiary must ensure open access to all peer-reviewed scientific
publications relating to its results. Therefore, all the scientific
publications originated by the SIA project will be made openly accessible,
through either gold or green open access publishing.
A list of journals with this status is available at “www.ieee.org/go/journals”
under the heading “IEEE Hybrid Open Access Journals”.
_Other Research Data_
Additionally, any other research data or information that might be publishable
will also be made openly accessible. However, any dissemination data linked to
exploitable results will not be put into the public domain if they compromise
their commercialization or have inadequate protection.
### 6.2.2 How the data will be made available
The open access mandate comprises two steps: depositing publications in
repositories and providing open access to them. The SIA project will fulfil
both steps by uploading the data to the Zenodo repository.
### 6.2.3 Methods or Software tools needed to access the data
As a general rule, the format of the data deposited in Zenodo repository will
enable the access to them through standard software tools like Adobe Acrobat
Reader or Microsoft Office Package.
For data formats that cannot be opened using standard software tools, reliable
information on the tools required to validate the results will be provided
with the data.
### 6.2.4 How access will be provided in case there are any restrictions
As detailed in previous sections, any dissemination data linked to exploitable
results will not be put into the public domain if they compromise their
commercialization or have inadequate protection. In this case, the scientific
committee of SIA will individually analyse and decide on the particular access
and time restrictions for each result.
## 6.3 Making data interoperable
SIA will encourage the use of standard vocabularies for all data types present
in the data sets to allow inter-disciplinary interoperability. In case this is
not possible for a specific data set, project partners will provide mappings
to more commonly used vocabulary.
## 6.4 Data re-use
### 6.4.1 How the data will be licenced to permit the widest reuse possible
Data re-use is subjected to the license under which it is deposited on Zenodo.
The Steering Management Committee (SMC) will decide on the specific license
that applies to each data deposited, taking into account the exploitability of
the results.
### 6.4.2 When the data will be made available for re-use
The data will be available for re-use immediately after deposition on Zenodo
Repository.
Scientific publications will be uploaded to Zenodo as soon as they are
released by the publisher and, at the latest, six months after publication.
Other research data not linked to scientific publication will be uploaded to
Zenodo following the instructions of the SMC.
### 6.4.3 Third-parties and re-usability
The data uploaded to Zenodo, as they are deposited on a free-access basis, can
be re-used by third parties.
### 6.4.4 Data quality assurance process
Data quality assurance is performed by the Zenodo Repository. In particular,
for each file uploaded, two independent MD5 checksums are stored. One checksum
is stored by Invenio and used to detect changes to files made from outside of
_Invenio_ . The other checksum is stored by _EOS_ and used for automatic
detection and recovery of file corruption on disks.
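The same kind of check can be reproduced locally to verify that a file retrieved from the repository is intact; a minimal sketch using Python's standard hashlib module is shown below (the file name and expected checksum value are placeholders).

```python
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 checksum of a file, reading it in 1 MiB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the checksum recorded at upload time (placeholder value).
expected = "9e107d9d372bb6826bd81d3542a419d6"
if md5_of("results_dataset.csv") != expected:
    raise ValueError("checksum mismatch: file corrupted or modified")
```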
### 6.4.5 Period for which the data will remain re-usable
All the files uploaded to Zenodo will remain re-usable for the lifetime of the
repository, which is tied to the lifetime of _CERN_ itself.
In case of closure of Zenodo, the best efforts will be made in order to
preserve the data in an alternative repository.
# 7 Allocation of resources
## 7.1 Costs for making data FAIR
The costs for making data FAIR are mainly those related to the cost of Open
Access to Scientific Publications, as the use of Zenodo Repository is free of
charge.
Costs related to open access to research data in Horizon 2020 are eligible for
reimbursement during the duration of the project under the conditions defined
in the H2020 Grant Agreement.
## 7.2 Responsibilities for data management in SIA project
Any member of the Consortium can upload content in the repository taking into
consideration committed data quality, naming conventions, etc. (see section
5).
## 7.3 Costs and potential value of long-term preservation
Long term preservation of SIA open access research data will be based on
Zenodo Repository, which is free of charge.
The Steering Management Committee of SIA will decide on long-term preservation
of research data associated to exploitable results. This will be done during
the project lifetime and based on the protection strategy followed by the
consortium.
# 8 Data security
## 8.1 Data storage
### 8.1.1 SharePoint
All the data of the project will be stored in the SharePoint site of the
project.
Microsoft SharePoint uses, according to Microsoft Corporation, some of the
strongest, most secure encryption protocols in the industry to provide a
barrier against unauthorized access to data. When data is at rest, two types of
encryption are used: disk encryption and file encryption. At the disk
encryption level, BitLocker is used to secure data; at the file encryption
level, every file is secured with its own key that uses the Advanced
Encryption Standard (AES) with 256-bit keys, which is Federal Information
Processing Standard (FIPS) 140-2 compliant.
### 8.1.2 Zenodo
##### 8.1.2.1 Data storage
All files uploaded to Zenodo are stored in CERN’s EOS service [4] in an
18-petabyte disk cluster. Each file has two replicas located on different
disk servers.
For each file two independent MD5 checksums are stored. One checksum is stored
by Invenio [5] and used to detect changes to files made from outside of
Invenio. The other checksum is stored by EOS and used for automatic detection
and recovery of file corruption on disks.
Zenodo may, depending on access patterns in the future, move the archival
and/or the online copy to CERN’s offline long-term tape storage system
_CASTOR_ in order to minimize long-term storage costs.
EOS is the primary low latency storage infrastructure for physics data from
the Large Hadron Collider (LHC) and CERN currently operates multiple instances
totalling 150+ petabytes of data with expected growth rates of 30-50 petabytes
per year. CERN’s CASTOR system currently manages 100+ petabytes of LHC data
which are regularly checked for data corruption.
Invenio provides an object-store-like file management layer on top of EOS,
which is in charge of, for example, version changes to files.
##### 8.1.2.2 Metadata storage
Metadata and persistent identifiers in Zenodo are stored in a PostgreSQL
instance operated on CERN’s Database on Demand infrastructure with 12-hourly
backup cycle with one backup sent to tape storage once a week. Metadata is in
addition indexed in an Elasticsearch cluster for fast and powerful searching.
Metadata is stored in JSON format in PostgreSQL in a structure described by
versioned JSONSchemas. All changes to metadata records on Zenodo are versioned
and happen inside database transactions.
## 8.2 Data transfer
All data exchanges will be performed through web services based on the HTTPS
protocol (both SharePoint and Zenodo fulfil this condition).
# 9 Intellectual Property Rights
## 9.1 IPR in the Consortium Agreement
The Consortium Agreement (CA) was signed by all project partners and has come
into force on the date of its signature by the Parties and shall continue in
full force and effect until complete fulfillment of all obligations undertaken
by the Parties under the EC-GA and the Consortium Agreement. The purpose of
the Consortium Agreement (CA) is to establish a legal framework for the
project in order to provide clear regulations for issues within the consortium
related to Results and Intellectual Property (IP), Ownership, Confidential
Information, Open Source issues, Standard contributions, and Access Rights to
Background and Foreground along the duration of the project and any other
matters of the consortium’s interest.
The CA also covers full rights and responsibilities of participants during the
project in respect of the confidentiality of information disclosed by the
partners, as well as the publication and communication of information.
Moreover, the CA provides additional rules to ensure effective dissemination
of the results. Settlements of internal disputes and of course Intellectual
Property (IP) arrangements are part of the CA as well.
Any knowledge, information, data and/or IPR generated before the effective
date of the CA (i.e. background) shall remain with the respective party
providing such background to the project. Any result generated by a party
after the said date, during and within the scope of the project (i.e. Result),
whether or not it qualifies for Intellectual Property Right (IPR) protection,
shall vest in the party that generated such Result. Any jointly generated
Result will be jointly owned. The rights and obligations associated to such
jointly generated Result will be regulated in the CA, but in any event each
joint owner contributing to the cost of such jointly generated Result shall
enjoy an unrestricted right to freely use and exploit such jointly generated
Result. Throughout the execution of the project, all partners will
continuously contribute to the identification of Results that may qualify for
IPR protection and will act with the aim of achieving a meaningful outcome for
the community following completion of the project.
In case certain results are identified to be essential for the future business
opportunities of the involved partners, the necessary steps will be taken to
protect such results accordingly. The patenting and other protective measure
procedures will proceed along the regulations set forth in the CA.
The IP terms and conditions during the cooperation of SIA will be, a priori,
based on a royalty free basis. After completion of the project (i.e. during
exploitation) access rights to background and to Results may require fair and
reasonable compensation with non-discriminatory conditions, subject to
agreement amongst the parties and reflected in the CA.
All access rights needed for the execution of the project and following
completion of the project will be granted on a non-exclusive basis, will be
worldwide and in principle, will not contain the right to grant sub-
license(s). The CA will further regulate rights and obligations for affiliated
entities of a party, where those shall enjoy the same access rights conditions
as the party participating in the project, and where such affiliated entities
will need to grant requested access rights to other parties if those are
needed during execution and/or following completion of SIA.
The CA also provides additional rules on the introduction, pursuant to
notification, of background that has been made available under controlled
license terms, e.g. so-called open source licenses. To the extent required for
proper use of software results, sub-licensing rights on software results are
regulated by the CA if it is in the best interest of the project
dissemination, where such sublicensing rights shall not be in a manner where
the so licensed software results would be subject to controlled license terms.
Means to make software results available to the other parties or to the public
are part of the CA.
## 9.2 IPR Models
Three possible models were considered during the proposal preparation
phase and will be evaluated during the project execution phases. The current
consensus of the consortium is the retention of title, ownership and
exploitation rights of the results and IPR generated on an individual
per-partner basis as the preferred option, although other non-binding options
will be explored.
Results - Foreground and IP shall be owned by the project partner carrying out
the work leading to such Foreground and IP. If any Foreground and IP is
created jointly by at least two project partners and it is not possible to
distinguish between the contributions of each of the project partners, such
work will be jointly owned by the contributing project partners. The same
shall apply if, in the course of carrying out work on the project, an
invention is made having two or more contributing parties contributing to it,
and it is not possible to separate the individual contributions. Such joint
inventions and all related patent applications shall be jointly owned by the
contributing parties. Alternative models to be explored, on the basis of
unanimous agreement, are:
* _Joint Ownership and Exploitation_ : The SIA partners will register title and jointly share the exploitation rights of the project foreground based on the relative share of Person Month effort dedicated to the project.
* _Per Work Package or Tasks Ownership and Exploitation_ : In this case, each partner or sub group of partners involved in individual tasks will register title and exploit the results of their own Work Package.
* _Combined Approach to Title and Exploitation:_ In this case the SIA partners will register title in the foreground and IPR in line with their own Foreground and IPR policy.
## 9.3 IPR Officer and the Auditing and Management of Generated Data, Results, and IPR
The SIA proposal has appointed a specialist on IPR. The role of the IPR
Officer (IPO) will be to act as an honest broker with the Research and
Management staff of the project and provide an objective audit and reporting
on the title and ownership of the IPR generated during the project. This will
be done via periodic surveys and reports based on the content of deliverables
and partners assigned to the associated tasks. The IPO nominated by the
project will conduct an Interim IPR Audit, which will identify the Results
generated by the project and their dependencies on External IP or Background
knowledge, and will recommend actions to be taken by the consortium for their
protection. CEIT has allocated a specific budget to appoint the IPO. The
results of the Interim IPR Audit will be reported internally, and the results
of the Final IPR Audit carried out at M36 will be reported in the
corresponding deliverable.
The IPR Officer is Dr. Isabel Hernando, Professor of Civil Law at University
of the Basque Country and IP Lawyer.
## 9.4 Final IP and Generated Foreground Control Report
The Final Report will summarize the results - foreground generated by each
partner during the SIA Project. During the periodic Reports all partners will
be requested by the Intellectual Property Officer to identify:
* The **access rights** granted to another partner of the consortium of their background, which was needed for the implementation of the project.
* The **access rights** granted to another partner of the consortium of their foreground, which was needed for the implementation of the project.
* The **background** used in the implementation of the project.
* The **foreground** generated in the project.
* The **Party exploitable foreground** generated in the project where are identified the Consortium **Single Products** (SP) per Parties and their Contributions – components.
Moreover, the partners will also identify in these Interim Reports the
commercial and open source software as well as the hardware used in the
implementation of the project. The Intellectual Property Officer will review
all this information and will provide advice about IPR when needed to the
consortium. All this information will be requested using the “Result and IP
Control Report Template” provided by the IPO and the Coordinator.
Once all the Foreground and IP information has been gathered from the
partners, the IPO will carry out an objective audit and will report on the
title and ownership of the IPR generated during the project.
Tables will be used to clearly identify the main outcomes of the IPR analysis.
Among the foreground generated in SIA during the Project, a **Table 1** will
identify those that are exploitable foreground of the consortium classified in
five groups (1) for Further Research; (2) for developing, creating and
marketing a product/process; (3) for creating and providing a service; (4) for
Standardization activities; and (5) For others (as Joint ventures, Spin-off,
licensing, etc.). A **Table 2** will provide the possible and recommended IPR
qualifications for the identified SIA Exploitable Foreground. This Table will
present the list of IPR registrations applied for or recommended. A **Table 3**
will identify the ownership percentages of the components – contributions to
the Single Products and the Intermediate Single Products developed in the SIA
Project. Finally, for the Exploitable Foreground, a **Table 4** will
identify the Parties’ ownership percentages of the Final Project Single
Products.
## 9.5 Exploitation and Ownership Agreement
Title and ownership of results is considered to be, from a legal perspective,
a matter of undisputable fact. The IPR audits will be presented by the IPO to
each partner as a proposal of ownership of results according to data provided
by the Partners and derived from the control of technical outputs and
deliverables for revision and acceptance by their organizations.
At the end of the SIA project, these audits will form the basis for a
potential Exploitation and Ownership Agreement (EOA) if any. This agreement
will not be a formal project deliverable and is a private contract between
partners in the same level as the Consortium Agreement. The Exploitation and
Ownership Agreement will be a binding legal contract between the partners
which will be negotiated and approved by authorized representatives within the
partners’ organizations.
# 10 Conclusions
This document presents the internal guidelines that will be followed for the
appropriate data and privacy management of the SIA project. Some of the
sections in this document will be updated throughout the lifetime of the
project, as previously indicated, in order to appropriately address the
practical requirements of the project. The overall data and privacy management
plan of the project described in this deliverable is aligned with the
information already provided in the Description of Action for SIA (as per
Grant Agreement number 776402).
1. **Introduction**
The CIRC4Life project complies with the FAIR data management concept in
developing this DMP. FAIR data management requires that project data should be
'FAIR', that is findable, accessible, interoperable and re-usable. These
principles precede implementation choices and do not necessarily suggest any
specific technology, standard or implementation solution.
Deliverable 10.3 is not intended as a strict technical implementation of the
FAIR principles; it is inspired by FAIR as a general concept. The following
documents have been referred to in order to develop the CIRC4Life DMP:
* Guidelines on FAIR Data Management in Horizon 2020 (EC, no date)
* FAIR data principles (FORCE11, no date)
* FAIR principles (Wilkinson _et al._ , 2016)
The FAIR DMP template (EC, no date) is a set of questions that should be
answered with a level of detail appropriate to the project. The DMP is
intended to be a living document in which information can be made available on
a finer level of granularity through updates as the implementation of the
project progresses and when significant changes occur. Therefore, DMPs should
have a clear version number and include a timetable for updates. As a minimum,
the DMP should be updated in the context of the periodic evaluation/assessment
of the project. If there are no other periodic reviews envisaged within the
grant agreement, an update needs to be made in time for the final review at
the latest.
2. **Data Summary**
The CIRC4Life project involves an ICT (Information and Communications
Technology) solution using a variety of technologies, which are important to
provide the means to implement and assess the effectiveness of the Circular
Economy Business Model (CEBM). These technologies will drive the data
generation and collection processes and help inform the testing against the
proposed methodologies of using an eco-point approach, sustainable product
design and production, real-time tracking and monitoring technology based on
EPCIS (Electronic Product Code Information Services), information logistics
sharing infrastructure across the supply chain and associated data security
and privacy.
The ICT solution will use a number of tracking technologies, potentially
including barcodes and EPCIS. The project process tasks will therefore
include full documentation on barcode code types and on EPCIS, and, with
regard to the latter, on the EPC (Electronic Product Code) Core Business
Vocabulary identifiers used. The documented descriptions will also contain an
explanation of why particular identifiers or models were used for all the
tracking technologies, and how they relate to the intended outcomes.
Example datasets with these descriptions will be supplied: the descriptions in
XML as well as Microsoft Word DOCX format, and the example tracking-data
datasets in CSV and Microsoft Excel XLSX format, so as to be compatible with
as many software applications as possible.
Subsequent research will benefit from analysing the identifiers chosen in
attempting to improve the sustainability performance model. A clear
description of the identifiers used, as well as accurate documentation and
example datasets of the tracking technologies used, will be critical to this
type of analysis.
With regard to the expected size of the data to be produced, the nature
of the CIRC4Life project, which involves a number of collaborators, makes an
accurate estimate difficult at this early stage, but it is proposed that the
data generated will be in the order of Gigabytes (approx. 100), rather than
Terabytes.
The importance of making these datasets FAIR (Findable, Accessible,
Interoperable and Reusable) is recognised and planned for but, taking into
account the ‘living’ nature of a data management plan as the project
progresses, the final form of the datasets cannot yet be declared in detail;
the Project Coordinator (PC) can, however, offer an outline of the data that
has been collected.
To date, research activity has primarily focussed on intelligence gathering
and project scoping. There now exists a subset of project data associated with
the development of models, techniques and processes. These have been reported
in the following project deliverables already submitted to the EU portal (See
Table 2.1). Each of these deliverables and their associated data are
contributing to the further project activities of the relevant WPs.
The project is also collecting data directly from consumers via online surveys
and Living Labs in this reporting period. For example, consumer attitudes to
the reuse and recycling of LED lighting products and to the eco-points system
have been measured using Survey Monkey, an online survey tool. A complete
overview of the data collected, and an analysis of these results, is reported
in Deliverable 3.4 - Report on consumer satisfaction survey (M18).
Publicly available information about the project and some data are made
available via the CIRC4Life Project
Website, available at: _https://www.circ4life.eu/_
The CIRC4Life SharePoint Site stores all project data for the duration of the
grant (see Section 5.1). Data is grouped and organised under each WP.
Therefore, this site gives the Project Co-ordinator complete oversight of all
data created by CIRC4Life. The following table gives a summary of the data
that has been collected to date and its relation to the WPs and
corresponding deliverables. These datasets will be utilised as each WP
progresses to inform the development and/or creation of future project
outputs.
### Table 2.1 Overview of CIRC4Life collected data in this reporting period
(M6-M18)
<table>
<tr>
<th>
**Tasks/Tools/Activities related to data**
</th>
<th>
Format
</th>
<th>
**Work Package**
</th>
<th>
**Related deliverables in this reporting period (M6-M18)**
</th>
<th>
**Responsible Partner(s)**
</th> </tr>
<tr>
<td>
Recycle bin cards, User Profiles
</td>
<td>
JSON
</td>
<td>
WP2
</td>
<td>
D3.1 - Development of eco-shopping and eco-account tools
</td>
<td>
NTU, EECC, ICCS
</td> </tr>
<tr>
<td>
Recycle bin traceability data
</td>
<td>
EPCIS, JSON
</td>
<td>
WP2
</td>
<td>
D2.3 - Development of the ICT system for reuse/ recycling
</td>
<td>
NTU, EECC, ICCS
</td> </tr>
<tr>
<td>
Data entry Tool
</td>
<td>
JSON
</td>
<td>
WP4
</td>
<td>
D4.3 - Report on development of adaptor systems for eco-point, LCA and EPCIS
event data interoperability
</td>
<td>
ENV, ICCS
</td> </tr>
<tr>
<td>
CIRC4Life Eco Account
</td>
<td>
JSON
</td>
<td>
WP4
</td>
<td>
D3.1 - Development of eco-shopping and eco-account tools
</td>
<td>
NTU, ICCS
</td> </tr>
<tr>
<td>
CIRC4Life Master Product Data
</td>
<td>
JSON-LD
</td>
<td>
WP4
</td>
<td>
D4.3 - Report on development of adaptor systems for eco-point, LCA and EPCIS
event data interoperability
</td>
<td>
ENV, ICCS
</td> </tr>
<tr>
<td>
CIRC4Life Live Product Data
</td>
<td>
JSON-LD
</td>
<td>
WP5
</td>
<td>
D5.2 - Development report and documentation for traceability components and
tools
</td>
<td>
EECC, ICCS
</td> </tr>
<tr>
<td>
LCA Module
</td>
<td>
.xlsx
</td>
<td>
WP1
</td>
<td>
D1.2 - Report on sustainable (environmental, social and economic) impact
analysis
</td>
<td>
NTU, JS, KOS, ONA, ALIA
</td> </tr>
<tr>
<td>
Brokerage Tool
</td>
<td>
XML
</td>
<td>
WP4
</td>
<td>
D7.3 - Report on the stakeholder involvement along the supply
</td>
<td>
GS1, ICCS
</td> </tr>
<tr>
<td>
Workshop data from Living Labs 1 **2**
</td>
<td>
.doc
</td>
<td>
WP7
</td>
<td>
D7.4 - Experience and recommendations of end-user engagement across circular
business model development
</td>
<td>
LAU
</td> </tr>
<tr>
<td>
Stakeholder contact list
</td>
<td>
.csv
</td>
<td>
WP9
</td>
<td>
D9.1 - Communication plan
</td>
<td>
MMM, NTU
</td> </tr>
<tr>
<td>
Consumer surveys
</td>
<td>
.csv
</td>
<td>
WP3
</td>
<td>
D3.4 - Report on consumer satisfaction survey
</td>
<td>
MMM
</td> </tr> </table>
# FAIR Data
## Making Data Findable, Including Provisions for Metadata
Through the active phase of the project, data management will include simple
organisational measures such as following a file naming convention (FNC) of
the form CIRC4Life_DocumentName_ResponsiblePartner_YYYYMMDD_Version.docx.
Document filenames will be kept short to avoid unnecessarily long paths, and
will always identify the last partner to edit the document and include a
version indicator.
For example, CIRC4Life_ProjectManagementPlan_NTU_20181009_V0.1.docx
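A minimal sketch of a validator for this convention is shown below; the regular expression is one possible reading of the FNC described above, not an official project tool.

```python
import re

# One possible reading of the FNC: project prefix, document name, partner
# acronym, date (YYYYMMDD) and version indicator; extensions are assumptions.
FNC = re.compile(
    r"^CIRC4Life_"
    r"(?P<document>[A-Za-z0-9]+)_"
    r"(?P<partner>[A-Z]{2,10})_"
    r"(?P<date>\d{8})_"
    r"V(?P<version>\d+\.\d+)"
    r"\.(docx|xlsx|pdf)$"
)

name = "CIRC4Life_ProjectManagementPlan_NTU_20181009_V0.1.docx"
match = FNC.match(name)
assert match, "file name does not follow the convention"
print(match.groupdict())
# {'document': 'ProjectManagementPlan', 'partner': 'NTU',
#  'date': '20181009', 'version': '0.1'}
```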
All project documentation will be stored in a Microsoft SharePoint project
site (as part of an Office 365 for Business cloud service instance) that will
enable full control over the editing permissions of project participants.
SharePoint, as a collaboration and document management platform, also offers
good functionality for platform-wide metadata control of content.
For additional knowledge management, an Excel file will be included and
promoted that contains a list of keywords, and the accepted definitions to be
used by all the participating project partners. It will also act as a
controlled vocabulary to ensure consistent knowledge organisation that will
aid subsequent retrieval and reuse. This document will be easily accessible
for project collaborators using the project SharePoint site and included on
the project website.
Project data will be made discoverable through the inclusion of a detailed
descriptive record that will be added to NTU’s institutional repository. This
service is indexed in Google and Google Scholar; therefore, the records will
be retrieved when anybody searches for the keywords associated with the
CIRC4Life project. Further information about this is provided in the section
below. A data availability statement will also be included in the project’s
published outputs to direct readers to a full overview of project data, as
well as the terms and conditions of accessing and using publicly available
data. This method will help increase the visibility of the data and make it
easier for people to locate and access them.
## Making Data Openly Accessible
The final datasets to support published outcomes will be deposited in Arkivum,
the NTU instance of the Archive-as-a-Service data safeguarding and long-term
lifecycle management solution. A Digital Object Identifier (DOI) will be
minted using the DataCite service, a global provider of DOIs for research
data, and included in the description (with additional metadata) in the NTU IR
to facilitate persistent identification and discovery.
The metadata included in the IR record will identify the file formats the
datasets are available in, which will indicate to potential users the software
application required to effectively use the datasets. Any software code will
also be described in the IR and archived, but intellectual property (IP)
implications may restrict access. The development of the ICT solution will be
accompanied by software documentation that will provide instructions for
reuse, and these will be archived and made available with the proviso again of
potential IP implications.
NTU uses the DataCite Metadata Schema to describe the data; the schema is
actively updated and promoted in coordination with community standards such as
ORCID ( _https://schema.datacite.org/_ ). Access to datasets will be open and
is currently available as a request/mediated service via the Library Research
Team at NTU, which will necessarily identify the individual requiring access
to the datasets and provide statistical information on dataset usage. However,
availability will always be determined by licence conditions, which will allow
some granularity of access (including restrictions) to be specified by
different project collaborators. DataCite also has a reciprocal dataset
registry service to aid discoverability and potential reuse of datasets that
have been determined to be openly available.
The data will be preserved for a minimum of ten years in the Arkivum service,
or in any other subsequent solution used.
Access to project documentation and data is only available to those who have
access to the SharePoint site, which is determined by the PC and implemented
by NTU research staff participating in the project. Additional backup and data
restore procedures will be agreed upon using this solution’s full
functionality. A further backup will be performed daily to an on-premise
storage solution hosted by NTU.
As stated, NTU has a long-term storage solution (i.e. Arkivum) available to
enable preservation and curation. It is an Archiving-as-a-Service solution
combining synchronised local and cloud storage, which guarantees data
integrity and redundancy with full security. This ensures that funder,
institutional or publisher retention compliance is satisfied, as is the
authenticity of the original data for open data requirements or post-research
review if necessary.
## Making Data Interoperable
NTU uses the DataCite metadata schema, as DataCite looks to community
practices that provide data citation guidance. The PC will ensure that the
SharePoint project site includes the appropriate mandatory DataCite metadata
elements for project files, such as Creator, Title, Description, Access to the
dataset, Data Collection Method and Data Processing/Preparation Activities;
these are included as mandatory metadata classification fields for file
inclusion on the SharePoint site. A sketch of such a record is shown below.
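For illustration, such a record could look as follows; the field names mirror the elements listed above, while the values are hypothetical.

```python
# A minimal sketch of a DataCite-style metadata record that could back the
# SharePoint classification fields; field names follow the text above,
# values are hypothetical.
datacite_record = {
    "Creator": "Nottingham Trent University (NTU)",
    "Title": "CIRC4Life consumer satisfaction survey data",
    "Description": "Responses to the M18 consumer satisfaction survey (D3.4).",
    "AccessToTheDataset": "Mediated via the NTU Library Research Team",
    "DataCollectionMethod": "Online survey (Survey Monkey)",
    "DataProcessingPreparationActivities": "Anonymisation; outlier removal",
}
```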
Using standard vocabularies for all data types is not initially anticipated,
as the data definitions are already integral to the tracking technologies
used: barcodes and the EPCIS standard. The barcode code types and the version
of the EPCIS standard used will be clearly stated.
However, the PC will ensure that any datasets supplied as CSV have a
description definition for all the data elements included, mapped to the
industry-standard description of the identifiers used for the different
tracking technologies, as illustrated below.
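A minimal sketch of such a data dictionary is shown below; the column names and the EPCIS/CBV mappings are illustrative assumptions for a hypothetical tracking-data CSV.

```python
# A minimal sketch of a data dictionary accompanying a tracking-data CSV;
# the columns shown are assumptions, the mapped EPCIS/CBV fields are taken
# from the EPCIS standard.
csv_data_dictionary = {
    "epc":        {"description": "Electronic Product Code of the item",
                   "maps_to": "EPCIS epcList entry"},
    "event_time": {"description": "Timestamp of the tracking event (ISO 8601)",
                   "maps_to": "EPCIS eventTime"},
    "biz_step":   {"description": "Business step of the event",
                   "maps_to": "CBV bizStep (e.g. shipping, receiving)"},
    "read_point": {"description": "Location where the event was captured",
                   "maps_to": "EPCIS readPoint"},
}
```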
## Increase Data Re-use (Through Clarifying Licences)
All openly available data will be offered on a CC BY-SA
(Attribution-ShareAlike) basis, so attribution will be required, and any
repurposed or re-used data will have to be shared on the same basis. IP
requirements may determine that some datasets are not open and will remain
closed but archived in Arkivum. The description of the individual datasets in
the NTU IR will make the conditions of the licence clear.
Data archived in Arkivum is retained for ten years and, subject to licence
conditions, will remain available for external viewing during that period.
During the active phase of the project, the data quality assurance process
will be organised by the PC (the individual responsible for data management)
to discover inconsistencies and other anomalies in the data, as well as to
perform data cleansing activities (e.g. removing outliers, interpolating
missing data) to improve the data quality. It is foreseen that this will
initially involve sampling datasets, with a thorough assessment of particular
grouped datasets if significant inconsistencies are discovered.
# Allocation of Resources
The cost of using the Arkivum Archiving-as-a-Service solution available at NTU
at the end of the project is £0.50 per GB of archived data per year. As the
projected final amount of data is expected to be in the order of 100
Gigabytes (an estimate), this will cost approximately £50 per year, or £500
for ten years. NTU will support the archiving of funded research datasets to
promote the open access and data agenda.
Responsibility for data management on the CIRC4Life project lies with the PC
of the leading collaborating institution (NTU). Everyday workflow tasks will
be delegated, but the PC will ensure that consistent data management is
performed for the duration of the project and will conduct six-monthly reviews
of the use of the controlled vocabulary, of the file naming and versioning
conventions, and of the adequacy of the organisational logic of the SharePoint
site.
The PC will be responsible for selecting the datasets to be archived, for a
period of not less than ten years. Preservation will support publication
outcomes and research deemed of long-term value, as well as the dissemination
output and literature of the project communication channels. Should personal
details be included in the preserved data, anonymity needs to be maintained,
while the data remains traceable should there be a need for source data
verification.
Any data deemed not worth saving in the active and archival stages will be
destroyed in accordance with the NTU Information Systems data destruction
policy.
# Data Security
As described in Section 3.2, the long-term security and preservation of
project data will be managed by NTU using the Arkivum appliance, known
publicly as The NTU Data Archive. This section details the security
arrangements for project data during the grant period. Data will be moved from
the CIRC4Life SharePoint site to Arkivum at the end of the project, or earlier
if data is released simultaneously with scientific publications.
## Data Security Policies in CIRC4Life ICT Platform
All CIRC4Life data will be collected, stored, protected, shared, retained and
destroyed upholding state-of-the-art security measures and in full compliance
with relevant EU legislation, bearing in mind the demands of crowdsourcing
contexts. As a general rule, this data will be stored on paper and/or in
computer files protected by state-of-the-art physical and logical security
measures: the archives containing the paper folders are locked; the computer
files are stored on computers and hard disks accessible only by authorized
personnel (within the relevant CIRC4Life partners) through a password. This
data will not in any case be shared with or disclosed to anyone outside the
research team until the data has been finalised for publication and approved
for release by the Consortium.
A data minimisation policy is adopted by CIRC4Life, which means that only data
strictly necessary for running the participatory and demonstration activities
is collected and processed. Personal data, if any, collected and stored within
CIRC4Life for the purposes of the aforementioned project activities will be
permanently and irrevocably erased on project completion. Only if an
individual participant has provided his/her free, specific and informed
consent will his/her name, age, professional occupation and professional views
be included in project outputs. If such consent is not provided by the
individual participant, only information that can be processed in a way that
inhibits tracing his/her opinions back to him/her (anonymised information)
will be part of the activities.
In particular, and with respect to access control and data protection,
CIRC4Life data (including data from all input sources) will be collected
by the CIRC4Life ICT platform (hosted on ICCS premises), which will store it
in the relevant databases. This platform and the associated data repository
system provide the means for deploying access control policies as described
in Task 4.3. These are flexible and fine-grained means to assign permissions
to roles and users in such a manner that access to resources can be controlled
sufficiently for all eventualities. Moreover, the system will provide secure
access to the data repository by using security protocols such as OAuth2 for
authorization and secure encrypted HTTPS calls. More specifically, the
open-source software Keycloak is used for access to the ICT platform and for
the authentication of all the applications; the token flow is sketched below.
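As an illustration, the sketch below shows a client obtaining an OAuth2 access token from a Keycloak token endpoint and using it in an encrypted HTTPS call; the host, realm, client credentials and API endpoint are placeholders, not actual project values.

```python
import requests

# Keycloak exposes an OpenID Connect token endpoint per realm; the host and
# realm below are placeholders.
TOKEN_URL = ("https://keycloak.example.org/auth/realms/circ4life"
             "/protocol/openid-connect/token")

resp = requests.post(TOKEN_URL, data={
    "grant_type": "client_credentials",
    "client_id": "ict-platform-client",   # placeholder client
    "client_secret": "CLIENT_SECRET",     # placeholder secret
})
token = resp.json()["access_token"]

# Every subsequent repository call is an encrypted HTTPS request carrying
# the bearer token, so access can be checked against roles and permissions.
datasets = requests.get(
    "https://ict-platform.example.org/api/datasets",  # placeholder endpoint
    headers={"Authorization": f"Bearer {token}"},
)
```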
## CIRC4Life End Users Personal Data
CIRC4Life ICT platform data will be stored in the ICCS data repositories.
These are secure servers with limited access to the Internet (with established
filtering rules allowing access only for specific web requests, or for
authorized personnel through a VPN). Security provisions are also taken for
the physical infrastructure (rooms) where these servers reside. Furthermore,
specific encryption with the usage of Keycloak will provide an additional
level of security for the personal data.
## Data Anonymization
Data anonymization refers to the processing of personal data in such a manner
that the personal data can no longer be attributed to a specific data subject
without the use of additional information, provided that such additional
information is kept separately and is subject to technical and organisational
measures to ensure that the personal data are not attributed to an identified
or identifiable natural person.
However, the explicit introduction of anonymization is not by itself
sufficient to replace other data protection measures. Therefore, security
policies for data protection should always be enforced as strictly as
possible. The controllers of the CIRC4Life ICT Platform should require that
data anonymization is enforced before any dataset is uploaded to the ICT
Platform, for example as sketched below.
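A minimal sketch of such a step is given below, replacing a direct identifier with a keyed hash before upload; in the terms of the definition above, the secret key is the "additional information kept separately". This is a simplified illustration (strictly speaking, pseudonymisation), which is precisely why the text insists that further security policies remain in force.

```python
import hashlib
import hmac

# The secret key is the "additional information kept separately" by the
# controller; the value here is a placeholder.
SECRET_KEY = b"kept-separately-by-the-data-controller"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed (HMAC-SHA256) hash."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"user": "jane.doe@example.org", "eco_points": 42}  # hypothetical
record["user"] = pseudonymise(record["user"])
# Without the separately stored key, the record can no longer be attributed
# to the data subject.
```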
# Ethical Aspects
Any personal data gathering within the CIRC4Life project will conform to the
informed consent expectations of current data protection legislation and of
the EU General Data Protection Regulation (GDPR), which came into force on
25 May 2018.
The project team has developed the CIRC4Life Ethical Clearance Checklist. This
must be completed by each team whose tasks involve ethical issues. The
checklist addresses aspects associated with data use and data retention. It
requests confirmation that the team is familiar with the GDPR and that the
activity has been designed through close consideration of the issues
surrounding data protection. The archive of signed checklists from each team
(see Table 6.1) is placed on the CIRC4Life SharePoint Site to give the project
management administrative oversight of all ethical aspects of RDM. All
participant information sheets explain how data will be used during the
project; they are provided in the participants’ language and written in a
clear and straightforward manner. The CIRC4Life Survey Privacy Notice is an
example of this and is available here:
_https://www.circ4life.eu/survey-privacy-policy_ .
An ‘Ethics requirements’ work package has been developed, and Deliverables
D11.2, D11.3 and D11.7 of Work Package 11 have been submitted in this
reporting period in order to address the ethics requirements of the CIRC4Life
project.
**Table 6.1 Archive of CIRC4Life Ethical Clearance Checklist in this reporting
period (M6-M18)**
<table>
<tr>
<th>
**WP/Task name**
</th>
<th>
**Signed by**
</th>
<th>
**Signed time**
</th>
<th>
**Brief description on the activities and objectives**
</th> </tr>
<tr>
<td>
Task 7.2 - Implementation living labs (M10-M30)
</td>
<td>
KOS, LAU
</td>
<td>
23/05/2019
</td>
<td>
Leasing service workshop in Lighting Industry Association (LIA), Telford, UK
on 28 May 2019.
</td> </tr>
<tr>
<td>
Task 7.2 - Implementation living labs (M10-M30)
</td>
<td>
ALIA
</td>
<td>
29/03/2019
</td>
<td>
To organize at least three workshops in relation to Task 7.2. Living Lab
Implementation Activities, in order to plan DEMO4. Each workshop will last
around 2 h 30 minutes and will involve both, citizenship and public
administration.
</td> </tr>
<tr>
<td>
Task 8.4 - Policy alignment (M2M33)
</td>
<td>
CEPS
</td>
<td>
03/04/2019
</td>
<td>
The interviews for the first sub-task (analysis of policies and regulations)
were conducted between October and
December 2018. The team collected information through interviews with
companies involved in the demonstration of the project's circular business
models but also with other companies involved in the electronics and food
value chains in order to collect additional perspectives and information.
</td> </tr>
<tr>
<td>
Task 2.6 - End-user awareness for reuse/recycling (M10-M16)
</td>
<td>
IND
</td>
<td>
27/03/2019
</td>
<td>
Three different primary schools were selected in order to carry out a training
process with students and professors, focused on end-users awareness for
recycling and reuse. During this training, besides the educative process
itself, the intelligent container will be placed in the school in order to
give the opportunity to the students to dispose their WEEE and put in practice
the acquired knowledge. The participants are the students and the professors.
Neither photos nor videos will be taken without authorization and under no
circumstances personal data will be collected or used.
</td> </tr>
<tr>
<td>
WP7 - Stakeholder Interaction and End-user Involvement (M1-M30)
</td>
<td>
LAU
</td>
<td>
19/10/2018
</td>
<td>
First Innovation Camp event 2018 included 70 participants, out of which 40
external participants. External participants were selected based on the
procedures and selection criteria described in D11.1 section 2.2. Participants
were classified based on the Quadruple Helix type (academia, business
consumer, policy) and selected to ensure transparent and fair participation of
QH types, as well as gender and geographical balance.
</td> </tr>
<tr>
<td>
Task 3.6 - Consumer (satisfaction) survey (M13-M18)
</td>
<td>
MMM
</td>
<td>
08/07/2019
</td>
<td>
Conduct online surveys using Survey Monkey in order to easily process and
analyse the data; Complement these surveys which tend to attract only certain
type of consumer with a medium/high education with physical interviews using
questionnaire in key spots such as recycling/collecting points, zero
waste/organic shops, and trade fairs in several countries.
</td> </tr>
<tr>
<td>
Task 6.3. - Demonstration of CEBM with tablets (M19-M33)
</td>
<td>
REC
</td>
<td>
07/05/2019
</td>
<td>
A pilot test will be conducted in the Basque Country (Spain), specifically in
Getxo municipality. For that, Getxo's
inhabitants will employ the APP developed in the project and the intelligent
containers placed in Getxo for disposing their devices. The participants of
this test pilot will do it voluntarily and their number is unknown. Any
personal data will be linked to the APP and will comply the GDPR.
</td> </tr> </table>
# Outlook and Conclusions
With regard to conducting research projects, NTU expects staff and students
involved in research to adhere to the policies set out in the ‘Code of
Practice for Research’, the ‘Research Ethics Policy’ and the ‘Research Data
Management Policy’. NTU has also sought to follow the requirements and
recommendations of Horizon 2020 EU funding as described in this document.
Lastly, as a UK HE organisation, NTU has provided support for research-active
members to be aware of national guidelines on open data, and to follow the
principles wherever possible.
The DMP will be the guiding document for the project’s data treatment and
management. As has been seen, the DMP describes which data is collected,
processed or generated and how, while also outlining the methodology and
standards used. Furthermore, the DMP explains whether and how this data is
shared and/or made open, and how it is curated and preserved. Finally, it
should be taken into account that the DMP evolves during the lifespan of the
project; thus, this second version will be updated in M36 to reflect the
status of the CIRC4Life project with respect to its data management.
**1\. Executive Summary**
The ReCiPSS project is committed to following the open access policy that is
part of the European Horizon 2020 programme. For this purpose, all
publications such as articles and conference contributions, as well as
research data related to the project’s outcomes, that do not fall under
confidentiality must be deposited in an open access repository. To achieve
widespread dissemination, the results will be available on the ReCiPSS
website, linked to the open repositories _OpenAIRE_ and _Zenodo_ , and
distributed on scientific platforms linked via the individual partners’
_ORCID_ iDs.
This document summarizes the guidelines for publishing research data that
result from the project as well as the necessary previous steps that need to
be undertaken by each partner to be able to do so. The document differentiates
between official documents and data on the one hand, and the representation of
this data and the work leading to the results in different scientific social
media and repositories on the other hand.
In ReCiPSS, research data will be stored on the _Zenodo_ platform, which is
integrated with _ORCID_ and easily usable with _OpenAIRE_ . _Zenodo_ is funded
by the _European Commission_ and enables the public upload of research with
open accessibility.
Reports, journal publications, conference contributions and other written
documents need to be published with open access as well. To ease
accessibility after publishing under a suitable license, documents may be
uploaded via the institutional repositories of the ReCiPSS partners and
linked to the _OpenAIRE_ representation of the project, either from the
publisher’s open access repository or from the institutional repository. In
_OpenAIRE_ , both research data and written documents are centrally collected
and linked to each other as well as to the responsible researchers.
To make sure the extent of information and collection of publications is
consistent through different platforms such as _Mendeley_ , _MyScienceWork_ or
_ResearchGate_ , project partners create and use their ORCID ID (central,
independent identifier) to store, export and import information on their
scientific work, especially in the context of the project ReCiPSS.
2. **Introduction**
This document is a result from the task on managing open data and open access
publications in ReCiPSS (Task 8.4), as it lays out the plan for open data
management including the two main types of data that are:
1. Written documents, such as reports, journal articles, conference contributions etc.
2. Research data that underlie the written documents and deliverables that result from the project, independent of a specific publication.
In defining a process for handling those two kinds of outcomes, the project
consortium seeks to fulfil the demand for publicly accessible and (re)usable
publication of results.
1. **Document Scope**
The role of this document is to record the planned procedure for the
publication of different kinds of data and documents towards the European
Commission and the wider public, as well as to be a guide for the project
partners. The procedures proposed and the repositories defined in this
document may be adjusted or changed in the course of the project due to new
developments in the externally available possibilities or internal learnings.
The role of this document is not to make sure that data is allowed to be
published, that no confidentiality clauses are violated, or that the documents
to be published meet the quality standards of the project. This document
assumes that all relevant consortium-internal coordination and agreements have
taken place and that all necessary conditions for the publication of data or
reports are met.
2. **Methodology**
To create this document, different options for hosting both research data
(such as spreadsheets or databases) and written documents were analysed for
their possibilities and the integration with each other as well as the
compatibility with the chosen central hub for relevant information on the
project: _OpenAIRE_ . With the possibilities at hand, use cases were developed
for different kinds of outcomes and usages of platforms to condense these use
cases into a process flow chart.
3. **Document Structure**
This document is structured into four parts:
1. A presentation of the use cases and the resulting requirements towards an open data management process.
2. A presentation of the external services used.
3. Process flowcharts that guide the usage of the services and fulfil the initially formulated requirements.
4. An outlook on next steps and possible adjustments in the development of the open data management plan.
3. **Use cases and requirements**
The following sections will describe different use cases that require an open
data management plan. The use cases serve the purpose of deriving the
necessary means for the open data management in the project ReCiPSS, while the
description raises no claim to completeness. Additional uses of the described
services are possible.
1. **Use case descriptions**
## 3.1.1. Publication of a deliverable
The project partners have worked on a deliverable for a specific work package.
This deliverable has been finalized and already submitted to the EC. Now they
want to publish this deliverable, as it is categorized with the dissemination
level “public”. They need a clear procedure for where the document should be
published and how public access can be ensured. 1
## 3.1.2. Publication of a conference contribution or journal article
A project partner has worked on a deliverable that is already published
through the project’s channels. She now wants to publish the methodology she
developed for creating the content of this deliverable as an independent
publication in a scientific journal or a conference. 2 The open data
management plan needs to guide the publication with the correct “cc” license.
Furthermore, a way needs to be defined that links the publication to the
project and other publications resulting from it.
## 3.1.3. Publication of research data (spreadsheet) or additional documents
like PowerPoint slides
A project partner published a deliverable report as well as a journal article
on developments in ReCiPSS. The published written documents are based on an
extensive set of data that were analysed and that need to be known in order to
be able to recreate the scientific process. The algorithm for the data
analysis was also developed, but it was too long to be included in the
publication itself. The partner created a set of PowerPoint slides as a guide
to use this data and understand the algorithm. The project partner needs to
have a platform that the research data (the spreadsheet with the analysed
data, the source code of the algorithm and the PowerPoint slides) can be
uploaded to under a “cc” licence and that can be connected to both the journal
article and the deliverable report, compatible with _OpenAIRE_ . 3
## 3.1.4. Scientific exchange via a platform
A project partner published both a deliverable report and a journal article
with the respective data underlying the developments. Now, they want to add
this information to their _ResearchGate_ group so that their peers and
interest groups are informed on the developments and where to find them. Apart
from that, they appreciate the open discussion on such platforms that result
from publishing there. They need to have a way of how to deal with such
external, private researcher accounts that makes sure the latest data is
always visible on all such platforms as well as on _OpenAIRE_ . They should
furthermore be connected, so that their peers can look up other publications
from other partners of the ReCiPSS project.
## 3.1.5. External search for information about the project
A project partner’s colleague heard about ReCiPSS and wants to know what it is
about and if there are any current developments. Therefore, they look up the
project on _Google_ and find the ReCiPSS website with a description on the
welcome-page as well as the deliverables listed. They now want to find out
more relevant information, such as associated publications, other works and
projects of the consortium partners and the data that was worked with. For
them, different representations of the project, such as the _OpenAIRE_
representation for access to all relevant data and the _LinkedIn_ group or the
_ResearchGate_ group for discussions and further work of the involved
researchers, need to be linked to from the project website.
## 3.1.6. Institutional repositories
Towards the end of a year, the librarian of a project partner’s institution
asks for the publications in the concluding year to put together the annual
institution report. Therefore, publications, which are published under a
creative commons open access licence, need to be uploaded to the institutions
repository and categorized with the correct keywords. This upload to the
institutional repository should also be linked to the _OpenAIRE_
representation of the ReCiPSS project.
**3.2. Requirements**
Requirements that result from the analysis of the described use cases are
listed in the following Table 1.
## Table 1 Requirements towards the open data management plan
<table>
<tr>
<th>
_No_
</th>
<th>
_Requirement_
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
Publicly accessible publication repository for public deliverables
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
Publication guideline including licence for open access
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
Link publications from and on external providers to _OpenAIRE_
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
Describe where to upload research data, such as spreadsheets or source code
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
Link research data to _OpenAIRE_
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
Support the scientific exchange on external platforms such as _ResearchGate_
</td> </tr>
<tr>
<td>
**7**
</td>
<td>
Enable data export and import to ease actuality of data on different platforms
via a central source of information
</td> </tr>
<tr>
<td>
**8**
</td>
<td>
Use and connect institutional repositories for written documents such as
conference contributions or journal articles
</td> </tr>
<tr>
<td>
**9**
</td>
<td>
Support the website as a central information hub on different repositories and
publications
</td> </tr> </table>
4. **External services used**
This chapter lists external services, such as research networks, hosting
services and repositories that are used to fulfil the requirements described
in 3.2.
1. **OpenAIRE**
_OpenAIRE_ is an acronym standing for “Open access infrastructure for research
in Europe”. _OpenAIRE_ is an EU organisation that provides an open repository
to link all research results from EU funded projects. It is part of the
Horizon 2020 program ( _https://www.openaire.eu/_ ) . To use this repository
and be compliant with the necessary open publication requirements, partners
should make sure a journal’s or conference’s publication licencing is
compatible with the _OpenAIRE_ literature guidelines. All relevant information
for researchers is given at _https://www.openaire.eu/guides_ . The link to
the ReCiPSS-project in _OpenAIRE_ can be found at _http://s.fhg.de/recipss_ .
2. **Zenodo**
_Zenodo_ is a central research data repository funded by the _European
Commission_ (EC), _OpenAIRE_ and _CERN_ to store any type of research data
and datasets up to 50 GB ( _https://zenodo.org/_ ) . Besides being a place to
host large datasets, _Zenodo_ also supports the upload of other scientific
publications, presentations, videos or other kinds of data related to
research. Uploaded data is endowed with a DOI and is thus citable, and it
can be licenced with a cc licence or different models. 4
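For illustration, a deposit to _Zenodo_ can also be scripted against its
public REST API. The following is a minimal sketch in Python, assuming a
personal access token created in the Zenodo account settings; the token, file
name, metadata values and licence identifier are illustrative placeholders,
not project-specific values.

```python
# Minimal sketch of depositing research data on Zenodo via its REST API.
# Assumptions: a personal access token with deposit scope; the token, file
# name, metadata values and licence identifier are placeholders.
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "YOUR_ZENODO_TOKEN"  # created in the Zenodo account settings

# 1) Create an empty deposition.
r = requests.post(ZENODO_API, params={"access_token": TOKEN}, json={})
r.raise_for_status()
deposition = r.json()

# 2) Upload the data file into the deposition's file bucket.
bucket = deposition["links"]["bucket"]
with open("analysed_data.csv", "rb") as fp:  # hypothetical research data file
    requests.put(f"{bucket}/analysed_data.csv", data=fp,
                 params={"access_token": TOKEN}).raise_for_status()

# 3) Attach metadata, including an open "cc" licence.
metadata = {"metadata": {
    "title": "Example ReCiPSS dataset",
    "upload_type": "dataset",
    "description": "Research data underlying a ReCiPSS publication.",
    "creators": [{"name": "Doe, Jane", "orcid": "0000-0000-0000-0000"}],
    "license": "cc-by-4.0",  # licence identifier as listed by Zenodo
}}
requests.put(deposition["links"]["self"], params={"access_token": TOKEN},
             json=metadata).raise_for_status()
# Publishing the deposition (POST to deposition["links"]["publish"])
# then mints the citable DOI mentioned above.
```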
3. **ORCID**
_ORCID_ is a non-profit organization aiming to connect researchers and their
scientific work. It seeks to make sure that, e.g., typos in a name or
organization, or names and titles that change during a scientific career, do
not lead to publications appearing to have been authored by different
researchers. A persistent digital identifier is used to link publications,
projects, proposals etc. to the researcher instead of the name. On the
_ORCID_ platform, participants can link their publications to their personal
information/profile. With an _ORCID_ profile, they can also sign in to other
scientific platforms and import their information there. Using an _ORCID_ ID
ensures consistent information on all other platforms, since data does not
need to be entered manually at each platform but can be stored in the _ORCID_
profile and imported or updated.
( _https://orcid.org/_ )
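As a sketch of how platforms can read such a persistent identifier
programmatically, the public _ORCID_ API exposes each record as JSON. The
snippet below is illustrative only; the ORCID iD shown is the example
identifier from the ORCID documentation, not a project member.

```python
# Minimal sketch of reading a public ORCID record as JSON, e.g. to keep
# profile information consistent across platforms. The ORCID iD below is
# the example identifier from the ORCID documentation.
import requests

orcid_id = "0000-0002-1825-0097"
r = requests.get(f"https://pub.orcid.org/v3.0/{orcid_id}/record",
                 headers={"Accept": "application/json"})
r.raise_for_status()
record = r.json()

# The "person" section carries the name that stays stable even if it is
# spelled differently in individual publications.
name = record["person"]["name"]
print(name["given-names"]["value"], name["family-name"]["value"])
```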
4. **Creative Commons Licence**
_Creative Commons_ (cc) is an American non-profit organization that publishes
different licence contracts with which an author can define the public’s right
to use his or her output. The organization has released several copyright
licenses, known as _Creative Commons_ licenses, free of charge for the public
to use in cases such as those described in the previous sections. These
licenses allow creators to communicate which rights they reserve, and which
rights they waive for the benefit of recipients or other creators [1] (
_https://www.creativecommons.org_ ) . Publications via _Elsevier_ , for
example, are usually open-access published under the creative commons license
_CC BY-NC-ND 4.0_ . 5
5. **Institutional repositories**
Research organisations and universities usually have their own institutional
repository for storing research data (apart from open repositories such as
_Zenodo_ ). For _TU Delft_ , _Masaryk_ _University_ , _KTH Stockholm_ and
_Fraunhofer_ , the links are provided in Table 2 below. Those repositories
should be used in ReCiPSS to store individual publications, e.g. in journals,
and to afterwards link them to _OpenAIRE_ .
## Table 2 Institutional repositories
<table>
<tr>
<th>
_Partner_
</th>
<th>
_Repository link_
</th> </tr>
<tr>
<td>
**TU Delft**
</td>
<td>
_https://repository.tudelft.nl/_
</td> </tr>
<tr>
<td>
**Masaryk University**
</td>
<td>
_https://is.muni.cz/repozitar/?lang=en_
</td> </tr>
<tr>
<td>
**KTH**
</td>
<td>
_http://kth.diva-portal.org/smash/search.jsf?rvn=1&dswid=244_
</td> </tr>
<tr>
<td>
**Fraunhofer**
</td>
<td>
_http://publica.fraunhofer.de/star-web/ep09/index.htm_
</td> </tr> </table>
5. **Process flow chart**
To make the anticipated publication process easy to follow and comply with,
this chapter organizes the individual steps for project partners into flow
charts. The first section will highlight the necessary actions at the start of
the project, before a first publication. The second section will highlight the
process of publishing different kinds of data and written documents and
include steps to be taken regularly throughout the project.
In addition to the tasks of all participants when finishing and publishing
foreground, Figures 1 to 3 also show the role of the WP8 partners _KTH_ and
_FHG_ in supporting the open data management plan as laid out in this document.
**5.1. Steps at the start of the project**
To be able to use the services highlighted in the sections above, research
partners need to use _ORCID_ as an unambiguous identifier for themselves and
their work. Figure 1 shows the process of creating or updating this _ORCID_
ID as a necessary preparatory step before publishing.
Figure 1: Preparative tasks independent of a specific publication. Project
participants check whether they already have an _ORCID_ ID: if not, they
create an _ORCID_ profile and add their personal information and previous
publications; otherwise they update their existing profile if necessary. They
then join the ReCiPSS project group at all used platforms or connect uploaded
data to the ReCiPSS project group (mandatory) and, optionally, connect
profiles on additional platforms or communities with their _ORCID_ ID. WP8
creates the ReCiPSS project group at all common platforms used by the project
participants.
**5.2. Steps when finishing documents to be published**
During the project, when finishing documents that are to be published, either
as a public deliverable or as a journal article, conference contribution or
set of research data, it is important to make sure documents are uploaded at
the right place and that the connection to the created groups and hubs, such
as _OpenAIRE_ or the project website, is guaranteed. The procedure to be
followed in that case is shown in Figure 2. It distinguishes between _research
data or other documents and reports_ , _public deliverables_ , _restricted
deliverables_ and _publications via conference proceedings, journals or
comparable channels_ . For collecting the public deliverables in the same
place as e.g. research data, a community at _Zenodo_ was created, where public
deliverables can be added in addition to their representation on the project’s
website, so that they can easily be embedded in the website via _OpenAIRE_ .
The link to this group is _http://s.fhg.de/zenodo_ .
Figure 2: Tasks when finishing a publication and/or research data. Project
participants first make sure the original document is published under an
open-access-compatible license and that they own the right to publish it
(relevant information at _https://www.openaire.eu/guides_ ). Research data or
other documents are uploaded to _Zenodo_ (registering/logging in with the
_ORCID_ ID) and added to the ReCiPSS community at _http://s.fhg.de/zenodo_ ;
deliverable reports, if public, are sent to the KTH project management to be
uploaded on the ReCiPSS homepage and sent to the EC; publications hosted by an
external party that cannot be linked to _OpenAIRE_ are deposited in the
institutional repository. The data (from the publisher’s repository, _Zenodo_
or the institutional repository) is then added to the project on _OpenAIRE_ ,
the publication is added to the _ORCID_ profile, profiles on all platforms are
updated, and the publication is recorded in the Dissemination plan. WP8
regularly reviews the Dissemination plan and monitors _OpenAIRE_ and the other
platforms.
Apart from this process, which is followed whenever a document or data set
needs to be published during the project, the following Figure 3 shows the
regular tasks of maintaining the data at _ORCID_ and maintaining the open
data management plan.
Figure 3: Regular tasks during the project. Project participants regularly
check and, if necessary, update their _ORCID_ profile; WP8 regularly reviews
and adjusts the open data management plan.
Finally, Figure 4 shows a summarizing chart depicting the different services
used and how they interact with each other, so that the ReCiPSS website can be
used as a central hub to access all relevant information and publications of
ReCiPSS.
Figure 4: Interaction of the services used. Results (publications, research
data and deliverables) are published via the EC and the homepage, the
institutional repositories and _Zenodo_ , and are collected in the ReCiPSS
project group, with publications updated via _ORCID_ import. The results page
of the project website ( _http://www.recipss.eu/results_ ) lists each
publication with title, author, participants and dissemination date, embeds
the _OpenAIRE_ overview in the website (HTML) via embedding code, and links
to the _ResearchGate_ project group for easy access.
In the general overview of Figure 4, the website embeds the project overview
from _OpenAIRE_ and thus grants access to all publications from institutional
repositories as well as data hosted on _Zenodo_ that is correctly linked to
the project. To make sure that, besides the project results, the discussion in
the peer groups via supporting research and citation networks is kept up to
date, project partners use their _ORCID_ ID to synchronize information on all
of those platforms. Links to e.g. the project group on _ResearchGate_ or
relevant researcher profiles can also be centrally collected on the project
website, along with a register of publications, research data and deliverables,
as an overview apart from the actual data accessible via _OpenAIRE_ .
6. **Conclusions**
This document outlines the open data management plan to ensure that the
ReCiPSS project publishes according to the open access strategy of the _EU_
Framework Programme for Research and Innovation _Horizon 2020_ . As mentioned
earlier, this initial version of the plan can be adjusted in the course of the
project when necessary. Each partner is responsible for considering these
guidelines when publishing foreground, so that ReCiPSS is disseminated in the
best possible way.
The open data management plan, as laid out in this document, fulfils the
defined requirements in the ways depicted in Table 3.
_**Table 3 Requirement fulfilment** _
<table>
<tr>
<th>
_No_
</th>
<th>
_Requirement_
</th>
<th>
_Service_
</th>
<th>
_Fulfilment_
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
Publicly accessible publication repository for public deliverables
</td>
<td>
_Zenodo_ (ReCiPSS community)
</td>
<td>
✔
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
Publication guideline including licence for open access
</td>
<td>
_Creative Commons_ licences, _OpenAIRE_ guides
</td>
<td>
✔
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
Link publications from and on external providers to _OpenAIRE_
</td>
<td>
_OpenAIRE_
</td>
<td>
✔
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
Describe where to upload research data, such as spreadsheets or source code
</td>
<td>
_Zenodo_
</td>
<td>
✔
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
Link research data to _OpenAIRE_
</td>
<td>
_OpenAIRE_
</td>
<td>
✔
</td> </tr>
<tr>
<td>
**6**
</td>
<td>
Support the scientific exchange on external platforms such as _ResearchGate_
</td>
<td>
_ResearchGate_ , _ORCID_
</td>
<td>
✔
</td> </tr>
<tr>
<td>
**7**
</td>
<td>
Enable data export and import to ease actuality of data on different platforms
via a central source of information
</td>
<td>
_ORCID_
</td>
<td>
✔
</td> </tr>
<tr>
<td>
**8**
</td>
<td>
Use and connect institutional repositories for written documents such as
conference contributions or journal articles
</td>
<td>
Institutional repositories
</td>
<td>
✔
</td> </tr>
<tr>
<td>
**9**
</td>
<td>
Support the website as a central information hub on different repositories and
publications
</td>
<td>
Project website with embedded _OpenAIRE_ overview
</td>
<td>
✔
</td> </tr> </table>
7. **References**
[1] _https://en.wikipedia.org/wiki/Creative_Commons_
# INTRODUCTION: INTERMIN
The H2020-Project INTERMIN will commence in February 2018 lasting a total of
36 months. Its goal is to create a feasible, long-lasting international
network of technical and vocational training centres for mineral raw
materials’ professionals. Specific objectives of the project are to develop
common metrics and reference points for quality assurance and recognition of
training and to create a comprehensive competency model for employment across
the primary and secondary raw materials sector. INTERMIN activities include:
1. To develop an international qualification framework for technical and vocational training programs on mineral raw materials’ topics, based on present and future requirements by employers.
2. To foster joint international training programs by a merger of competences and scope of existing training programmes.
3. To optimise future interaction and collaboration in Europe and internationally.
The project activities require contact with people as well as the collection,
analysis, treatment and storage of primary data (data collected by the
Consortium involved in INTERMIN) and secondary data (data collected by others
and published or publicly available). INTERMIN also includes the development
of a repository, which consists of a database of documents used and generated
by the project.
# CONTENT AND SCOPE
This deliverable addresses the procedures for gender issues and ethics.
The DMP (Data Management Plan) will outline how data are to be handled.
In order to meet EU requirements, this deliverable will address the
following:
* Details on the procedures and criteria that will be used to identify/recruit research participants.
* Detailed information on the informed consent procedures that will be implemented for the participation of humans.
* In case of data not publicly available, the relevant authorisations that will be provided, in particular with regard to the US Government restricted data on college graduates.
* Detailed information on the informed consent procedures that will be implemented in regard to the collection, storage and protection of personal data.
* Detailed information on the procedures that will be implemented for data collection, storage, protection, retention and destruction, and confirmation that they comply with national and EU legislation.
INTERMIN is construed in accordance with and governed by the laws of Belgium,
excluding its conflict of law provisions.
# IDENTIFICATION AND RECRUITMENT CRITERIA
_**7.1 H-REQUIREMENT No. 1-Part 1:** Details on the procedures and criteria
that will be used to identify/recruit research participants _
The INTERMIN network should map skills and knowledge in the EU and third
countries, identify key knowledge gaps and emerging needs, develop a roadmap
for improving skills and knowledge, and establish common training programmes
in the raw materials sectors.
Building on existing EGS and NGSO European and global cooperation in mineral
raw materials, INTERMIN’s overall objective will be to promote and mobilize
European institutions and other international partners in creating a future
network to address relevant training. In addition, worldwide institutions and
partners will also be identified and initial contact made with them, alerting
them to the upcoming plan to develop an international network of raw materials
training centres.
For each investigation activity details on the procedures and criteria that
will be used to identify/ recruit participants shall be provided. It is at the
participant’s discretion as to whether s/he wishes to participate in the
investigation activity or not. Contact details will be provided, for
participants to contact the Project Consortium for information and decide
whether they wish to join in.
At the time of the present deliverable (April 2018) the research structure
of INTERMIN comprises:
1. Females/males:
2. Researchers/non-researchers:
## Identification/recruitment procedures
Each partner is able to use the conventional recruitment procedures
implemented in their institution (taking into account a gender balance),
provided that they are not in conflict with the good practice criteria of the
EC.
A basic requirement is language skills; background knowledge in raw materials
is recommended.
## Summary of the characteristics of research participants
The criteria for research participants (partners and third parties) are the
same as for respondents (language skills, raw materials knowledge).
The use-cases will involve only voluntary participants aged 18 or older who
are capable of giving consent; they will be informed of the nature of their
involvement and of the data collection/retention procedures through an
informed consent form before the commencement of their participation. Terms
and conditions will be transparently communicated to the end-users by means of
an information sheet including descriptions of: the purpose of the research,
the adopted procedures, and the data protection and privacy policies. Please
note that all research participants will have the capacity to provide informed
consent: i.e., individuals who lack the capacity to decide whether or not to
participate in research will be appropriately excluded from research. Finally,
INTERMIN pilots may involve certain vulnerable groups, e.g., elderly people
and immigrants.
# INFORMED CONSENT
_**7.1 H-REQUIREMENT No. 1-Part 2:** Detailed information must be provided on
the informed consent procedures that will be implemented for the participation
of humans _
Participation of individuals in INTERMIN activities will be on a voluntary basis
only. Participants in interviews, surveys, workshops, conferences or any other
INTERMIN activities or events will be invited and adequately informed of the
aims and methods of the research. The documentation given to potential
participants should be comprehensible and there should be an opportunity for
them to raise any issues of concern. The documentation must include contact
details of the beneficiary performing the research, allowing participants to
get in touch even after data-gathering has been concluded. The documentation
must also inform the participants on the procedure for making complaints.
Appendix 1 includes a template for a participant information sheet.
A request for consent form will be given to potential participants, before the
start of the research activities. Participants must be informed that they are
free to withdraw consent to participation and leave the event at any time,
without giving any explanation. Participants must fill in and sign the request
for consent form before the start of research activities. Consent is in
writing and records of consent will be maintained. Appendix 2 includes a
template for a consent form.
# AUTHORISATION FOR DATA NOT PUBLICLY AVAILABLE
_**7.2 POPD – REQUIREMENT No. 2-Part 1:** In case of data not publicly
available, relevant authorisations must be provided, in particular, with
regards to the US Government restricted data on college graduates. _
According to this requirement, INTERMIN will pay special attention to the
authorisations needed for any data that are restricted or not publicly
available.
# PROTECTION OF PERSONAL DATA
_**7.2 POPD – REQUIREMENT No. 2-Part 2:** Detailed information on the informed
consent procedures that will be implemented in regard to the collection,
storage and protection of personal data. _
INTERMIN will collect data (opinions, information and insight) from
individuals who volunteer to participate in the project research. Protecting
the rights and freedom of opinion and expression of these individuals is an
ethical obligation of the Consortium.
Prior to the commencement of activities that involve the collection of
personal data (e.g. interviews and surveys), regardless of the country in
which the activities will be carried out, the beneficiary responsible for that
activity must submit to the coordinator information on data collection
procedures and accompanying evidence (e.g. copies of the scripts and
questionnaires that will be used, consent forms, local sign off, participant
information sheets). The coordinator will forward those documents to the
Ethics Committee 1 , asking for verification and confirmation that the
activities that are proposed will conform with the ethical guidelines of
INTERMIN and Horizon2020. The Ethics Committee verifies compliance of the
activities with INTERMIN and the H2020 ethical guidelines and issues an
ethical approval for the described activities.
Individuals wishing to voluntarily supply a new document to INTERMIN’s
repository need to request access to the system manager. This is an online
procedure and, together with the password to log in into the repository upload
entry page, prospective contributors will receive information on INTERMIN’s
aims and will be asked to read and confirm a request for consent online form.
Data voluntarily uploaded by contributors into INTERMIN’s repository website
will be checked for copyright holder’s consent and suitability by the INTERMIN
repository Administrators before inclusion in the public database.
To guarantee protection of personal data the following ethical guidelines are
in place:
1. No vulnerable or high-risk groups (e.g. children, adults unable to consent, people in dependency relationships, vulnerable persons) will be addressed by INTERMIN or during the development of INTERMIN;
2. Participation of individuals in INTERMIN activities will be on a voluntary basis only;
3. Persons are only approached in their professional capacity;
4. Participants in interviews, surveys, workshops, conferences or any other INTERMIN activities or events will be invited and informed that they are free to leave the event at any time, without giving any explanation;
5. It is made clear to all participants that they may request that contact ceases and that their contact information is removed from the INTERMIN databases;
6. A description of INTERMIN aims and specific purposes will be provided in advance to potential participants invited to participate in INTERMIN’s activities, together with a request for consent;
7. The minimum possible amount of personal contact data will be collected;
8. The purpose for collecting contact data is to obtain professional opinions and information only and no research will be performed on the contact data;
9. Personal contact data will be considered confidential, will be held securely and will not be disclosed to entities or individuals that do not participate in the INTERMIN Consortium;
10. To safeguard confidentiality of the opinions of participants in INTERMIN activities the disclosure of individual opinions will never be made.
In case of conflicts of interests researchers will inform the INTERMIN Ethics
Committee, ensuring respect to the conduct of the research from inception to
publication.
All Ethics Approval, Informed Consent, and Data Protection documentation will
be centrally held by the Project Coordinator and hence available for audit.
1. A project description sheet will be provided to potential participants together with an invitation explaining also our ethics procedures. In case of an interview, the project description will be sent in advance. The project description will provide information about the general aims of the INTERMIN project and the specific purposes of the workshop/conference/event/survey or interview the person is partaking into.
2. Request for consent will be sent out with the invitation to potential participants. Potential participants are informed that their participation is entirely voluntary and that a refusal to participate will not have any consequences for them. A refusal to participate by the invitee will be accepted under any circumstance. Similarly, request for consent will be sent out with the invitation to potential interviewees and asked again during the interview. A consent request will be included in any surveys undertaken.
3. Informed consent is collected in any case of participation, given exclusively for the specified purpose and it is revocable.
4. Only the name and affiliation of participants will be recorded. Recorded names and affiliations will only be distributed (if required) across delegates of the same event.
5. During electronic communication, email addresses will be encrypted to avoid sending personal information to groups of people.
6. All email communication will be archived during the life of the project and the quality assessment time period after finishing the project. This email archive will only be for the purposes of the INTERMIN project.
7. Retention of personal data is limited to “no longer than necessary to achieve the purpose specified”. After this period the personal data will be destroyed. Data destruction includes the destruction of all personal data in printed or virtual form.
8. Stakeholders/delegates will be given the option to not provide their personal information if they wish to do so.
# DATA MANAGEMENT
_**7.2 POPD – REQUIREMENT No. 2-Part 3:** Detailed information must be
provided on the procedures that will be implemented for data collection,
storage, protection, retention and destruction and confirmation that they
comply with national and EU legislation _
## Procedures for data collection
Primary data collection will be made through surveys, meetings with experts
and interviews. Secondary data collection will be made by desk research of
published or publically available data.
INTERMIN also considers the possibility of receiving data and information from
individuals who voluntarily take the initiative of submitting documents
through INTERMIN’s repository website.
Personal contact data will be collected to facilitate screening of
professional opinions and information. No research will be performed on the
contact data and the minimum possible amount of contact data will be
collected. Contact data will be considered confidential and will not be
disclosed. Prospective participants will be given the option to not provide
their personal information if they do not wish to do so.
Primary data collection will be planned in advance. A project description
sheet will be provided to potential participants together with an invitation
to participate and an explicit request made for written consent (issue of a
consent form). The request for consent form will include a short description
of INTERMIN ethics procedures and the information that participation is
entirely voluntary and that a refusal to participate will not have any
consequences for the prospective participant. A refusal to participate (or to
cease participation without giving reasons) will be accepted by the invitee
under any circumstances.
Individuals wishing to voluntarily supply a new document to INTERMIN’s
repository need to request access to the system manager. This is an online
procedure and, together with the password to login into the repository upload
entry page, prospective contributors will receive information on INTERMIN’s
aims and will be asked to read and confirm a request for consent _via_ an
online form. Data voluntarily uploaded by contributors into INTERMIN’s
repository website will be checked for copyright holder’s consent and
suitability by the INTERMIN repository Administrators before inclusion in the
public database.
## Procedures for data storage, protection and retention
All non-confidential documents collected and produced in the course of the
INTERMIN project will be stored and publicly available in INTERMIN’s website
repository. These documents cover the whole scope of the project. Some are
held in digital form (most as PDF files) within the database itself and
available for download. Others are available online in digital form elsewhere
and links are provided to these. Yet others are not available online but may
exist as digital files or on paper within the INTERMIN Observatory or in
institutions or companies elsewhere.
The main storage medium of the INTERMIN Repository is a relational database
table which gives access to all referenced documents. Those documents which
are held online by the INTERMIN Observatory are maintained in a folder within
the website for instant download.
The Repository database provides a set of document discovery tools, allowing
search and browse options. Documents are indexed by metadata elements (field
of interest, geography and commodity) and by keywords, and searchable
abstracts are provided, often taken directly from the abstracts or executive
summaries of the individual documents.
There are three classes of user: "Guest", "Contributor", and "Administrator".
Anybody may visit the Repository as a “Guest” without any need to login (or
provide personal data), and public access is provided to all documents stored
within the Repository. Users are reminded that they must respect copyright and
other intellectual property rights which attach to these documents, many
(indeed most) of which are from third-party sources. Where documents are
available for download from third-party sites, the terms and conditions of
those sites must be respected. “Contributors” are members of the INTERMIN
consortium and others who are accredited as providers of documents to the
database. They have password-controlled access allowing them to add documents
to the database. Such updates must be accompanied by the appropriate metadata,
including keywords, and an abstract for each document. Contributors may add
items (“write access”) to the database but cannot modify or delete existing
entries. “Administrators” are a small number of INTERMIN participants having
full administrative access allowing them to modify the database contents and
structure, to make backup copies, and other similar procedures. Administrators
will also take any action resulting from a complaint made concerning the
accuracy of any existing entry or copyright queries.
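A minimal sketch of this three-tier access model is given below; the role and
permission names are illustrative only, not the repository’s actual
implementation (which is written in ASP).

```python
# Minimal sketch of the Guest / Contributor / Administrator access model
# described above; names and structure are illustrative placeholders.
from enum import Enum, auto

class Role(Enum):
    GUEST = auto()          # read-only, no login required
    CONTRIBUTOR = auto()    # may add documents, but not modify or delete
    ADMINISTRATOR = auto()  # full access, including backups and corrections

PERMISSIONS = {
    "read":   {Role.GUEST, Role.CONTRIBUTOR, Role.ADMINISTRATOR},
    "add":    {Role.CONTRIBUTOR, Role.ADMINISTRATOR},  # "write access"
    "modify": {Role.ADMINISTRATOR},
    "delete": {Role.ADMINISTRATOR},
}

def is_allowed(role: Role, action: str) -> bool:
    """Return True if the given role may perform the given action."""
    return role in PERMISSIONS.get(action, set())

assert is_allowed(Role.GUEST, "read")
assert is_allowed(Role.CONTRIBUTOR, "add")
assert not is_allowed(Role.CONTRIBUTOR, "modify")
```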
The repository will not store any information in "cookies" or other data
elements on users' own computers, other than the database session cookies
which are deleted automatically at the end of each online session, and will
not access any data from any user's computer.
Although the repository website does not include personal data or any data
that are financially sensitive, security is nonetheless a consideration, in
terms of the costs of building and maintaining the database, and will be of
increasing importance. It is known that using standardised website development
aids such as WordPress or Joomla can carry security risks which have been
exploited. They also involve unnecessary overheads in terms of redundant
capabilities which incur space and time penalties.
The repository database website has therefore been developed using hand-coded
HTML with only as much ASP and JavaScript scripting as required to provide the
necessary functionality. It uses an SQL database (Microsoft Access). There are
known risks in SQL if queries are carelessly coded, allowing unauthorised
access that could compromise the integrity of a database, even to the extent
of deletion of whole tables. INTERMIN uses three ways to mitigate these risks:
(1) allowing write access ONLY to a restricted list of accredited users who
must log in using randomised passwords; (2) coding the SQL queries in such a
way that any attempt to use SQL query exploits will be foiled. The principal
ways in which this is achieved are by using selection lists, by limiting query
length, and by filtering queries to block malware. These checks are carried
out, as appropriate, using server-side ASP scripting before access to the
database, and by using ASP-scripted tests that explicitly check the queries
against retrieved data instead of applying the queries directly to the
database. There is an efficiency penalty in this second option, but it is not
anticipated that the INTERMIN database will grow to such an extent that this
will be a serious constraint; (3) taking regular backup copies of the entire
database. This will limit the extent of any damage in the event that the
security is breached or that the server itself becomes corrupted or otherwise
unavailable. INTERMIN uses a regular backup procedure, with additional special
backups after each major update cycle.
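As an illustration of mitigations (1) and (2), the sketch below combines a
selection list of allowed fields, a query-length limit and a parameterised
query. It is written in Python with sqlite3 purely as a stand-in; the actual
repository uses ASP scripting against a Microsoft Access database, and the
table and column names here are hypothetical.

```python
# Illustrative sketch of the query hardening described above: a whitelist
# ("selection list") of searchable fields, a length limit on user input,
# and parameter binding so that input never becomes part of the SQL text.
import sqlite3

ALLOWED_FIELDS = {"field_of_interest", "geography", "commodity"}  # selection list
MAX_TERM_LEN = 100

def search_documents(conn: sqlite3.Connection, field: str, term: str):
    if field not in ALLOWED_FIELDS:      # only whitelisted columns reach the SQL
        raise ValueError("unknown search field")
    if len(term) > MAX_TERM_LEN:         # limit the length of user queries
        raise ValueError("search term too long")
    # Parameter binding keeps user input out of the SQL statement itself,
    # so a crafted term cannot alter it (e.g. delete whole tables).
    sql = f"SELECT title, abstract FROM documents WHERE {field} LIKE ?"
    return conn.execute(sql, (f"%{term}%",)).fetchall()
```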
INTERMIN’s confidential documents (e.g. personal data collected in project
activities) will be stored in secure computers, with access only by the
immediate research team, and never in shared servers or cloud storage systems.
All documents publically available in INTERMIN’s website repository will be
retained during the life time of the project and handed over to the European
Union’s International Observatory for Raw Materials, who will manage the
Repository in the future. The business planning for the permanent Observatory
will build in the same set of data security safeguards as are being
implemented during the INTERMIN project.
All confidential data collected in the course of INTERMIN will be deleted
unless subject to specific EU regulatory requirements. In that case it will be
handed over to the European Union’s International Observatory for Raw
Materials with a receipt that must be signed by responsible parties of both
entities, transferring to the European Union’s International Observatory for
Raw Materials the responsibility for retaining in safe storage and managing
the confidential data.
All Ethics Approval, Informed Consent, and Data Protection documentation will
be centrally held by the Project Coordinator and hence available for audit.
## Procedures for data destruction
When confidential data is no longer needed (or, at the latest, at the
conclusion of the project term, unless subject to specific EU regulatory
requirements) it should be disposed of in a secure and responsible manner. If
data is on paper records, these must be destroyed using a cross-cut shredder.
If data is on floppy disks, hard drives, memory sticks, CDs, DVDs or mobile
devices, it should be erased and the media must be overwritten with random
data. If this is not possible, the media must be physically destroyed through
pulverising or crushing.
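As a sketch of the overwrite step for digital media, the following Python
fragment overwrites a file’s contents with random data before deleting it.
This is illustrative only; on SSDs and journalling file systems an overwrite
at file level may not reach every physical block, which is why physical
destruction remains the fallback described above.

```python
# Illustrative sketch of overwriting a confidential file with random data
# before deleting it, as described above for digital media.
import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace contents with random bytes
            f.flush()
            os.fsync(f.fileno())       # force the overwrite onto the device
    os.remove(path)
```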
# IMPORTED/EXPORTED PERSONAL DATA
_**7.3 NEC – REQUIREMENT No. 4:** The applicant must provide details on the
personal data which will be imported to/exported from EU and provide evidence
that authorisations have been applied for _
When personal data is transferred outside the European Economic Area, special
safeguards are foreseen to ensure that the protection travels with the data
(adequacy decisions, standard contractual rules, binding corporate rules,
certification mechanism, codes of conduct, so-called "derogations" etc).
**Directive 95/46/EC, Article 25:**
1. The Member States shall provide that the transfer to a third country of personal data which are undergoing processing or are intended for processing after transfer may take place only if, without prejudice to compliance with the national provisions adopted pursuant to the other provisions of this Directive, the third country in question ensures an adequate level of protection.
2. The adequacy of the level of protection afforded by a third country shall be assessed in the light of all the circumstances surrounding a data transfer operation or set of data transfer operations; particular consideration shall be given to the nature of the data, the purpose and duration of the proposed processing operation or operations, the country of origin and country of final destination, the rules of law, both general and sectoral, in force in the third country in question and the professional rules and security measures which are complied with in that country.
3. The Member States and the Commission shall inform each other of cases where they consider that a third country does not ensure an adequate level of protection within the meaning of paragraph 2.
4. Where the Commission finds, under the procedure provided for in Article 31 (2), that a third country does not ensure an adequate level of protection within the meaning of paragraph 2 of this Article, Member States shall take the measures necessary to prevent any transfer of data of the same type to the third country in question.
5. At the appropriate time, the Commission shall enter into negotiations with a view to remedying the situation resulting from the finding made pursuant to paragraph 4.
6. The Commission may find, in accordance with the procedure referred to in Article 31 (2), that a third country ensures an adequate level of protection within the meaning of paragraph 2 of this Article, by reason of its domestic law or of the international commitments it has entered into, particularly upon conclusion of the negotiations referred to in paragraph 5, for the protection of the private lives and basic freedoms and rights of individuals. Member States shall take the measures necessary to comply with the Commission's decision.
# ETHICAL APPROVALS FOR THE COLLECTION OF PERSONAL DATA
## Approval process
Prior to the commencement of activities that involve the collection of
personal data (e.g. interviews and surveys), regardless of the country in
which the activities will be carried out, the beneficiary responsible for that
activity must submit to the coordinator information on data collection
procedures and accompanying evidence (e.g. copies of the scripts and
questionnaires that will be used, consent forms, local sign off, participant
information sheets). The coordinator will forward those documents to the
Ethics Committee, asking for verification and confirmation that the activities
that are proposed will conform with the ethical guidelines of INTERMIN and
Horizon2020.
The Ethics Committee will verify compliance of the activities with INTERMIN
and the H2020 ethical guidelines. If the Ethics Committee considers compliance
is assured it will issue, as soon as practicable, an ethical approval for the
aforementioned activities. In case the Ethics Committee considers compliance
is not assured it will inform the Coordinator as soon as possible, describing
which activities/tasks are not in line with INTERMIN and the H2020 ethical
guidelines and proposing corrective measures.
After receiving feedback from the Ethics Committee the Coordinator will inform
the beneficiary responsible for the activity, which can start the task (if the
ethical approval has been issued) or make the necessary corrective measures
and resubmit to the coordinator the revised procedures. In this case the
coordinator will convey the information to the Ethics Committee who will
repeat the compliance verification process. The figure below illustrates the
submission and approval process for INTERMIN ethically relevant activities.
All Ethics Approvals and associated documentation will be centrally and
securely held by the project coordinator and hence available for audit.
## Ethics committee
The Ethics Committee comprises a representative of each Work Package leader
and the members of the Advisory Board.
In case any of the members of the Ethics Committee has an impediment, he/she
shall inform the Chair and the project coordinator, who will appoint a
substitute. Given the nature of the activity of the Ethics Committee, there
are no established routine meetings and the Ethics Committee will intervene
whenever requested by the project coordinator. Communications within the
Ethics Committee will be managed by the Chair. The Chair’s decision will
prevail in case of a tied vote between the representatives of the INTERMIN
Management Committee.
# ETHICS REQUIREMENTS FOR NON-EU COUNTRIES
The INTERMIN project involves the participation of technical experts and
professionals who participate in professional workshops to provide their
technical assessments and consultation and technical expertise. Other than
contact information, no personal information will be collected from
individuals who participate in the project. No human research subjects will be
used. Data storage, protection, retention and destruction will obey specific
rules. INTERMIN will comply with the Horizon 2020 ethical standards and
guidelines and the EU directive on data protection ( _Directive 95/46/EC on
the protection of individuals with regard to the processing of personal data
and on the free movement of such data_ ) and with any updates it might receive
during the life time of the project.
INTERMIN includes the following beneficiaries from non-EU countries:
1. Coordinating Committee for Geoscience Programmes in East and Southwest Asia;
2. American Geological Institute;
3. The University of Queensland.
The ethical standards and guidelines of Horizon2020 for social or human
sciences research will be rigorously applied by the beneficiaries and linked
third parties of the INTERMIN project, regardless of the country in which the
research is carried out.
# APPENDIX 1: PARTICIPANT INFORMATION SHEET
INTERMIN PARTICIPANT INFORMATION SHEET
_This document must be adapted to specific aspects of the research activities
to be developed. Technical and academic terms should be avoided and the
language should be plain. The information may be provided in bullet points or
in the question-answer format, and the items below should be mentioned._
1. Title of Work Package;
2. Name of the activity;
3. Short description of INTERMIN and the overall aim of the activity;
4. Criteria for the selection of participants;
5. Short description of the methodology that will be used;
6. The place (if applicable), and the expected duration of the activity;
7. How research findings will be used (reports, publications, presentations);
8. Name and contact of the responsible for the activity;
9. Statement of confidentiality and details of coding system to protect the identity of participants;
10. Information that participation is totally voluntary and the participants are free to withdraw;
11. Procedure to contact with any concerns or complaints.
# APPENDIX 2: INTERMIN PARTICIPANT CONSENT FORM
INTERMIN PARTICIPANT CONSENT FORM
Title of Work Package _____________________________
Name of the activity ______________________________
YES NO
1. I have read the Information Sheet for this activity and have had details of the research explained to me. □ □
2. My questions about INTERMIN and the research activity have been answered to my satisfaction and I understand that I may ask further questions at any point. □ □
3. I understand that I am free to withdraw from this activity at any time, without giving a reason for my withdrawal or to decline to answer any particular questions, without any consequences to my future treatment by the researchers. □ □
4. I wish to participate in this INTERMIN activity under the conditions set out in the Information Sheet. □ □
5. I agree to provide information to the researchers under the conditions of confidentiality set out in the Information Sheet. □ □
Name ______________________________
Signature ______________________________
Date ______________________________
Contact details (email; phone) _____________________________________
Researcher’s name ______________________________
Researcher’s signature ______________________________
Researcher’s contact details (email; phone)
_____________________________________
_Please keep your copy of the consent form and the information sheet
together._
# Introduction
## Purpose of the document
The objective of this document is to define policies and technical solutions
for the data collected and generated during the project, across the different
work packages, whether the data are scientific or not (personal data, results
from surveys, etc.). The data presented in this document are categorized into
two groups.
The first category is the **climate data** , which range from the raw model
and observation data (described more thoroughly below) to the final plots of
essential climate variables presented on the website of the project.
The second category is the **personal data** , which refers to names, emails
and addresses of the users and the answers given during the surveys.
## Applicable and reference documents
The applicable documents are listed in the table below:
<table>
<tr>
<th>
**Id**
</th>
<th>
**Deliverables**
</th>
<th>
**WP**
</th> </tr>
<tr>
<td>
D8.1
</td>
<td>
POPD - Requirement No. 1: Information sheet of users
</td>
<td>
WP8
</td> </tr>
<tr>
<td>
D8.2
</td>
<td>
POPD - Requirement No.2: Guideline for personal data management
</td>
<td>
WP8
</td> </tr> </table>
# Glossary/Definitions
**_Reanalysis:_ ** systematic approach to produce datasets for climate
monitoring and research. Reanalyses are created via a data assimilation scheme
and model(s) which ingest a certain set of observations. The resulting dataset
will be referred to as “reanalysis” in this document.
**_Grib format:_ ** GRIB (GRIdded Binary or General Regularly-distributed
Information in Binary form) is a concise data format commonly used in
meteorology to store historical and forecast weather data. It is standardized
by the World Meteorological Organization's Commission for Basic Systems, known
under number GRIB FM 92-IX, described in WMO Manual on Codes No. 306.
Currently there are three versions of GRIB. Version 0 was used to a limited
extent by projects such as TOGA and is no longer in operational use. The first
edition (current subversion is 2) is used operationally worldwide by most
meteorological centres for Numerical Weather Prediction (NWP) output.
**_NetCDF format:_ ** NetCDF (Network Common Data Form) is a set of software
libraries and self-describing, machine-independent data formats that support
the creation, access, and sharing of array-oriented scientific data. NetCDF is
commonly used to store and distribute scientific data. The latest version of
the NetCDF format is NetCDF 4 (also known as NetCDF enhanced, introduced in
2008), but NetCDF 3 (NetCDF classic) is also still widely used.
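For illustration, a NetCDF file can be inspected with the widely used netCDF4
Python library, as sketched below; the file name and the variable name are
hypothetical.

```python
# Minimal sketch of inspecting a NetCDF file; the file name and the
# variable name ("t2m") are hypothetical placeholders.
from netCDF4 import Dataset

with Dataset("seasonal_forecast.nc") as nc:
    print(list(nc.dimensions))       # e.g. time, latitude, longitude
    print(list(nc.variables))
    t2m = nc.variables["t2m"]        # hypothetical 2 m temperature variable
    print(t2m.shape)                 # dimensions of the data array
    data = t2m[:]                    # reads the values into a masked array
```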
# Data collection
## Type of data collected
### Original data collected
Due to the characteristics of the activities carried out within the S2S4E
project, the data managed are classified into two types: scientific and
personal data. These will be managed according to the open data policy
recommended by the European Commission and the General Data Protection
Regulation (GDPR), respectively.
The project expects not only to collect data for the correct development of
the research activities, but also to create different types of data as
outcomes of the project.
#### Climate data
During the project, the main source of climate data used will be a collection
of model outputs, reanalysis, and observations.
For model outputs, the data used will be from the ECMWF and Meteo-France
seasonal prediction systems (System 5; SEAS5 for ECMWF) downloaded from the
Copernicus Climate Change Service (C3S), from NCEP CFS version 2, the North
American Multi-Model Ensemble (NMME), and POAMA from the Australian Bureau of
Meteorology (BOM). These files will be downloaded in their native format (GRIB
for ECMWF, NetCDF for the others). They will be used by work packages 4 (S2S
climate predictions) and 5 (Operational Climate Service: Decision Support Tool
(DST) implementation).
The reanalysis data will be used by WPs 3 (Observational datasets), 4 and 5
and will consist of: ERA5 and ERA-Interim (from ECMWF), the NCEP/NCAR
reanalysis, MERRA-2 and the Japanese ReAnalysis (JRA-55). They will be
downloaded in their original GRIB format and processed afterwards.
The observational data will be used by work package 3. The exact datasets will
be determined later in the project, but initially the data from wind “tall
towers” will be used. This data is a collection of individual observations
from wind farms around the world, coming in different formats (GRIB, NetCDF or
CSV).
A more detailed table about the climate data to be used within the project is
available in a live document ( _https://tinyurl.com/ydes27le_ ), which
presents the different datasets, with links, their spatial and temporal
resolutions, their time coverage, available variables, number of members, etc.
#### Personal data
Personal data managed in this project are only related to professional
information (name, position, firm, e-mail and phone number). Besides
professional information, pictures, videos and audio recordings may be
gathered, always with the informed consent of the people involved.
This information will be collected by work packages 2 (Definition of user
needs and the role of S2S forecasts in decision-making processes), 5
(Operational Climate Service: Decision Support Tool Implementation), 6
(Positioning, exploitation and business models), 7 (Dissemination,
communication and user engagement) and 8 (Ethics requirements).
Personal data management is described in deliverables D8.1 Requirement No.1:
Information sheet of users and D8.2 Guideline for personal data management.
Although not directly considered personal data, the knowledge and experience
gathered from other projects will also be managed by the project, as described
below.
#### Outcomes and reports from other projects
Knowledge and experience from other projects may be a third type of data
collected by the project (besides climate and personal data). This data will
be collected by WPs 2, 3, 4, 5 and 6, may be used to inform the work of these
WPs and can feed into some deliverables. It usually takes the form of
deliverables, reports or technical notes in PDF format.
### Data created during the project
During the project, from the “raw/original” data mentioned in the previous
section, indicators, tailored products, summaries and forecast outlooks will
be created by the different work packages. This data will mainly consist of
netCDF files following the CF conventions.
Work packages 2 to 7 will be involved in this process of data creation.
The interaction with energy users and other stakeholders will gather the
personal data of the people with whom the project interacts, for practical
logistics. Besides this personal data, the other information obtained through
the interaction with users will mainly consist of the detailed feedback
received through participatory activities. The exchange format of this data
will be determined later in the project according to the collection method,
and will be updated in the next versions of the Data Management Plan. Personal
data, together with the user feedback gathered by the DST, will be stored in a
secure environment at the BSC.
## Data collection method
To collect the data mentioned above, different technical solutions will be
used.
For the users' feedback and personal data, the information will be gathered
through participatory activities held throughout the project (surveys, online
forms, interviews, workshops, events and face-to-face meetings).
For the climate data, the technical infrastructure and solutions will be
defined and presented in D5.4 (Architecture Design Document and Interface
Design Document). Mainly, this will consist of SSH connections to remote data
servers and retrievals through FTP or the ECMWF web API.
# Documentation and Metadata
## Climate data standards and metadata
Regarding climate data standards, a guideline during the project is to comply
with the data standards established in the geospatial data community (such as
the CF conventions). The data will be formatted in NetCDF following the
INSPIRE Directive standards. As the proposal aims at using data from existing
platforms (Copernicus, etc.), the original data standards will be kept for
these data and adapted to the needs of S2S4E for the post-processed data.
Regarding personal data, the documentation and methodology will be further
defined in WP8 Ethics requirements and reported in the WP's specific
deliverables and milestones, as well as in the next versions of the DMP.
## Git branching strategy
In order to keep track of the different processes that lead to the generation
of data during the project, the source code of the different software used by
the DST will be kept under a version control system (a Git server hosted at
BSC) with a well-defined branching strategy, following an Agile development
method based on iterations called sprints.
This strategy is based on five active branches along the project:
* The master branch contains the last stable sprint version.
* The daily branch contains all feature developments that are done during the current Sprint but not fully validated.
* The common branch contains all common developments shared between multiple features.
* The qualif (qualification) branch contains all features that are done and validated.
* The prod (production) branch is the main branch for production, it contains the version currently in production.
Each feature developed during the sprint is done in a specific branch named
after the user story; usually this is a fork of the master branch.
When the feature is done (development, unit tests, integration tests), it is
merged into the daily branch to be validated.
When the feature is successfully validated, it is merged into the qualif
branch.
A feature may be forked from the qualif branch if it depends on another
feature developed during the sprint.
Sometimes, a feature development requires adding or modifying common projects
or parent pom files. In this case, either the development is done in the user
story branch and ported to the common branch, or it is done directly in the
common branch.
Once these changes have been pushed to the remote common branch, the developer
has to "ring the bell", warning all developers to merge the common branch
into their own branches.
Some days before the end of the sprint, a validation is performed with all
features merged into the qualif branch to test the integrity of the software.
During this phase, developers should react quickly to correct any raised bugs
to ensure a stable version. Every step of the processing will be released only
when successfully tested. Technical details of the different checks will be
given in D5.4 (DST Architecture Design Document and Interface Design
Document).
At the end of the sprint, the qualif branch is merged into the master branch,
which represents the next stable release.
Unfinished features are carried over to the next sprint.
This master release can then be merged into the prod branch, which is deployed
into the production environment.
# Ethics and Legal Compliance
The main ethical aspects that can affect the research activities of the
project are linked to participatory methods and relate to a) personal data
protection, b) data confidentiality and c) informed consent. These issues are
detailed further in deliverables D8.1 Information sheet of users and D8.2
Guidelines for personal data management. For any report on results from
participatory activities, the main ethical aspects will be covered in a
specific section of the report indicating procedures and solutions (consent
forms, anonymization, encryption of answers if necessary, etc.).
All Intellectual Property Rights (IPR) issues will be organized, managed and
discussed within the Innovation Management Board (IMB) (to know more about
this board, see deliverable D1.4 Composition and terms of reference of the
Innovation Management Board (IMB) and External Advisory Board (EAB)).
Original licences from climate data will be preserved. Software developed
during the project will have identified owners and licences. Further details
will be given in the next version of the DMP when all the different components
of the DST have been selected.
# Storage and Backup
During the development phase of the project, data samples will be downloaded
individually by the different partners from the original data sources and
stored in their own storage systems.
For the operational part of the DST, the climate data (files and databases)
will be stored at BSC datacentre (both data and metadata) and at SMHI for the
hydrological forecast. The data will be stored at BSC on a shared GPFS file
system managed by the Operations department from BSC. The storage is mounted
in GPFS Native RAID ensuring reliability, availability, data protection and
recovery.
# Selection and Preservation
The results of the participatory activities will be preserved as well as the
algorithms used and the software (in gitlab). Depending on the data, an
expiration time will be set. These expiration dates will be defined later
during the project when decisions have been made on all the datasets used by
the different WPs.
# Data Sharing
Following the EC recommendations on data sharing, the data generated within
S2S4E (energy indicators, derived variables, diagnostics, etc.) will be freely
accessible (upon registration) through the DST, in line with the open data
policy and following the same policies as the input data from the Copernicus
Climate Change Service, NMME, the S2S Project and other sources.
The results of the participatory activities will be shared through the project
wiki or through more secure channels, according to the confidentiality level
requested by the participants for the information they provide.
# Responsibilities and Resources
BSC, as project coordinator, will be responsible for the data management of
the project, together with Capgemini, The Climate Data Factory and CICERO.
Regarding personal data, each institution in the consortium is responsible for
the management of the data following the requirements of the General Data
Protection Regulation, as explained in D8.1 POPD - Requirement No. 1:
Information sheet of users and D8.2 POPD - Requirement No.2: Guideline for
personal data management.
The following table shows the total efforts foreseen for the DMP and its
updates along the project lifetime. It corresponds to task 5.1 Data Management
Plan - definition of data protocols and formats.
**Table 1: Costs assigned to data management in S2S4E**
<table>
<tr>
<th>
</th>
<th>
PM
</th>
<th>
Total costs
</th> </tr>
<tr>
<td>
**BSC**
</td>
<td>
3
</td>
<td>
13,500.00€
</td> </tr>
<tr>
<td>
**TCDF**
</td>
<td>
1
</td>
<td>
6,600.00€
</td> </tr>
<tr>
<td>
**Capgemini**
</td>
<td>
3
</td>
<td>
22,875.00€
</td> </tr>
<tr>
<td>
**TOTAL**
</td>
<td>
**7**
</td>
<td>
**42,975.00 €**
</td> </tr> </table>
# Summary
This document is the second revision of the Data Management Plan. The first
version was delivered at M6 as D5.2 Data Management Plan (DMP); the third
version is due at M36 as D5.4 Data Management Plan (DMP). The purpose of this
document is to present the management of data in the scope of the S2S4E
project.
# Keywords
Data, privacy, preservation, open access, storage
**About S2S4E**
The project seeks to improve renewable energy variability management by
developing a tool that for the first time integrates sub-seasonal to seasonal
climate predictions with renewable energy production and electricity demand.
Our long-term goal is to make the European energy sector more resilient to
climate variability and extreme events.
Large-scale deployment of renewable energy is key to comply with the emissions
reductions agreed upon in the Paris Agreement. However, despite being cost
competitive in many settings, renewable energy diffusion remains limited
largely due to seasonal variability. Knowledge of power output and demand
forecasting beyond a few days remains poor, creating a major barrier to
renewable energy integration in electricity networks.
To help solve this problem, S2S4E is developing an innovative service to
improve renewable energy variability management. The outcome will be new
research methods exploring the frontiers of weather conditions for future
weeks and months and a decision support tool for the renewable industry.
More information:
www.s2s4e.eu
# Introduction
## Purpose of the document
The objective of this document is to review the policies and technical
solutions for the data collected and generated during the project, across the
different work packages, whether the data are scientific or not (personal
data, results from surveys, etc.). In the following pages, we only consider
new developments or adjustments compared with the first version of the DMP
(D5.2, M6). The general definitions and policies are given in the previous
deliverable.
## Applicable and reference documents
The applicable documents are listed in the table below:
<table>
<tr>
<th>
Id
</th>
<th>
Deliverables
</th>
<th>
WP
</th> </tr>
<tr>
<td>
D5.2
</td>
<td>
Data Management Plan
</td>
<td>
WP5
</td> </tr>
<tr>
<td>
D8.1
</td>
<td>
POPD – Requirement No. 1: Information sheet of users
</td>
<td>
WP8
</td> </tr>
<tr>
<td>
D8.2
</td>
<td>
POPD – Requirement No.2: Guideline for personal data management
</td>
<td>
WP8
</td> </tr> </table>
# Data collection and storage
## Type of data collected
As described in the previous version of the DMP (deliverable D5.2), the data
collected during the project will be of two types: scientific data and
personal data. The personal data correspond to the logins to the DST and the
information provided by the users during the surveys conducted by WP2
Definition of user needs and the role of S2S forecasts in decision-making
processes. Regarding the Decision Support Tool (DST) users' data, it is
foreseen to collect these after the launch of the DST, following the data
protection policy.
## Data collection method and results
During the reporting period, only scientific data has been collected, using
SSH connections and the web API from the European Centre for Medium-Range
Weather Forecasts (ECMWF).
So far, around 4 TB of data has been collected from the following datasets:
the ERA5 reanalysis, ECMWF System 5 (via C3S) and NCEP CFS v2, for the
variables surface temperature, minimum and maximum temperature, sea level
pressure, surface wind module, precipitation and surface solar radiation. The
periods downloaded are 1993 to 2019 (ECMWF System 5), 1999 to 2019 (NCEP) and
1993 to 2018 (ERA5). All files are downloaded at the highest temporal
frequency available (1-hourly for ERA5, 6-hourly for the other systems) and
weekly and monthly means are computed from them to produce the final data
ingested by the DST.
All this data is kept in the GPFS Native RAID storage at BSC described in the
previous deliverable D5.2.
# Data sharing
For easy exchange of small temporary scientific data between partners, EUDAT
B2DROP has also been used. This allows easy access to the data and ensures its
availability over time without having to worry about giving access to the
partners' own storage.
The automatic download of the hydrological data from SMHI to the BSC archive,
for further ingestion in the DST, has now been set up and will be done through
an FTP server. The data will be downloaded once a week for the subseasonal
forecasts and once a month for the seasonal ones; the subseasonal data will
represent around 100 MB per week and the seasonal data around 100 MB per
month.
1\. Introduction
1.1. Purpose of the document
1.2. Background of VERIFY
1.3. Guideline: Open Research Data Pilot
1.4. Scope of the deliverable
2\. VERIFY data summary
2.1. VERIFY products
2.2. Data Questionnaire
3\. FAIR Data
3.1. Making data findable, including provisions for metadata
3.2. Making data openly accessible
3.3. Making data interoperable
3.4. Increase data re-use
4\. General Data Management issues
4.1. Allocation of resources
4.2. Data security
4.3. Ethical aspects
5\. VERIFY data sets
5.1. WP1 data
5.2. WP2-3-4 data
5.3. WP5-6 data
5.4. WP7 data
# Introduction
## Purpose of the document
The Data Management Plan (DMP) is a key element for the overall management of
the VERIFY project. It describes the data management life cycle for the data
to be collected, processed and generated by VERIFY. This document describes
how the VERIFY consortium intends to make its research data findable,
accessible, interoperable and reusable (FAIR), to ensure it is soundly
managed. The present release of the deliverable includes a collection of first
ideas from the VERIFY partners and is only to be seen as an initial version.
As the project progresses, and in accordance with the Description of Action,
the consortium will produce a second release (Month 18), a third (Month 36),
and a fourth and final release (at the end of the project) in order to include
the applied procedures in terms of methodology and the exhaustive set of data
collected, processed and generated by the project.
As part of making research data FAIR, the deliverable will include information
(following the successive revisions) on:
* The data that will be collected, processed and generated
* The handling of research data during and after the end of the project
* The methodology and standards that will be applied
* Whether data will be made open access and why
* How data will be curated and preserved (including after the end of the project).
## Background of VERIFY
VERIFY, as a Research and Innovation project, is bringing together an ensemble
of European partners, including national inventory agencies, research
institutes, in situ infrastructures (such as ICOS), operational centers
(ECMWF), a company providing consultancy and management services (ARTTIC) and
international organizations. The primary objective is to design a
pre-operational system to estimate GHG fluxes from an ensemble of atmospheric
and surface observations combined with state-of-the-art “atmospheric
inversion” and “ecosystem” models.
VERIFY partners are at the forefront of developments in the compilation of
emission inventories, the observation of the carbon/nitrogen cycle from
ground-based and satellite measurements, the process modeling of the
carbon/nitrogen cycle, atmospheric transport modeling, and data assimilation
and inversion systems. There will be four main areas of work covering:
observations, emission inventories, ecosystem modeling and inversion systems.
The work will target four specific objectives that will lead to the production
of various data streams and information:
1. Integrate the efforts between the research community, national inventory compilers, operational centers in Europe, and international organizations towards the definition of future international standards for the verification of GHG emissions and sinks based on independent observation.
2. Enhance the current observation and modeling ability to accurately and transparently quantify the sinks and sources of GHGs in the land-use sector for the tracking of land-based mitigation activities.
3. Develop new research approaches to monitor anthropogenic GHG emissions in support of the EU commitment to reduce its GHG emissions by 40% by 2030 compared to the year 1990.
4. Produce periodic scientific syntheses of observation-based GHG balance of EU countries and practical policy-oriented assessments of GHG emission trends, and apply these methodologies to other countries.
## Guideline: Open Research Data Pilot
This DMP has been prepared taking into account the template of the
“Guidelines on Data Management in Horizon 2020”
(http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf).
The elaboration of the DMP will allow VERIFY partners to address all issues
related to data handling and ethics.
We will follow the Open Research Data Pilot, which aims to improve and
maximize access to and re-use of research data generated by VERIFY. Following
the Guidelines to the “Rules on Open Access to Scientific publications and
Open Access to Research Data in Horizon 2020”, we refer to Research Data as
information and in particular facts or numbers, collected to be examined and
considered as a basis for reasoning, discussion, or calculation.
## Scope of the deliverable
The deliverable D8.6 Data Management Plan provides the initial outline of the
data management plan at month 6 of the project, including information on which
data sets will be created in the project, how they will be stored and
distributed within and outside the project and which data visualization
facilities will be provided. This document represents only the initial version
of the DMP and will be further developed over the course of the project.
# VERIFY data summary
## VERIFY products
Following the Description of Action of VERIFY, the products of VERIFY will
comprise reports, graphical displays, datasets (gridded and region-specific),
algorithms and code. VERIFY will deliver six major types of product to a large
community:
* **Product 1:** Regional details of GHG anthropogenic emissions and sinks across the EU, on a 10 km grid for bottom-up models and 50 km for top-down inversions, with full documentation on algorithms and uncertainties.
* **Product 2:** Attribution of GHG fluxes in the land-use sector to management (fertilizers use, forest management, agricultural practices) versus climate drivers (climate change, rising CO 2 and nitrogen deposition).
* **Product 3:** Annually updated observation-based national GHG budgets of the EU countries for CO 2 , CH 4 and N 2 O.
* **Product 4:** Annual synthesis and reconciliation of the GHG budgets of EU countries between observation-based estimates and UNFCCC inventories performed with representatives of national inventory agencies.
* **Product 5:** Full documentation, system requirements, and implementation recommendations to operationalize the methodology of the project for the routine update of GHG budgets verified by independent observations.
* **Product 6:** Synthesis of observation-based estimates and UNFCCC GHG budgets for China, US, Indonesia, performed with foreign academic and institutional partners, based on methodologies from the project.
The datasets associated to these generic Products will target a wide user
community to support them with parallel or alternative studies. Most data
products of VERIFY will be made publicly available to maximize the uptake by
the climate scientific community, the GHG inventory community and more
generally all stakeholders.
We envisage making use of existing data portals to ensure full visibility of
the datasets and to partly rely on these portals to store, distribute and
visualize the data. The exact framework is not finalized but we will rely on
three different infrastructures - all being developed by partners of VERIFY:
i) the ICOS Carbon portal, ii) the Global Carbon Atlas from the GCP, and iii)
the Climate Data Store (CDS) from the Copernicus program.
Table 1 below lists the main categories of data sets that will be produced by
VERIFY and stored in a database, with means to access them and visualize
their content (e.g., maps, time series, histograms).
**Table 1: Main categories of output data sets from VERIFY.**
<table>
<tr>
<th>
**Context / Type of data**
</th>
<th>
**Model/Observation types**
</th>
<th>
**Application**
</th>
<th>
**Output fields**
</th> </tr>
<tr>
<td>
Continental (EU) gridded data
products
</td>
<td>
Inventories of GHG emissions at high spatial and temporal resolution
Output of regional atmospheric inversion and land/ocean
ecosystem models
</td>
<td>
Raw output data that will be used for the synthesis of GHG emissions and sinks
estimates at country level
</td>
<td>
Fluxes of CO 2 , CH 4 , N 2 O
Land carbon stocks (above and below
ground)
</td> </tr>
<tr>
<td>
Global gridded data products
</td>
<td>
Output of global atmospheric inversions and from global land/ocean ecosystem
models
</td>
<td>
Global scale products used to complement the
European data sets and to provide fluxes of GHG for other regions than Europe
(China, US, Indonesia for
instance)
</td>
<td>
Fluxes of CO 2 , CH 4 , N 2 O
</td> </tr>
<tr>
<td>
Region-specific time series (fact sheet)
</td>
<td>
Synthesis of the raw gridded data sets in terms of country total annual GHG
fluxes (anthropogenic and natural)
</td>
<td>
Main output of VERIFY to be compared with UNFCCC inventory estimates of
GHG emissions
</td>
<td>
Fluxes of CO 2 , CH 4 and N 2 O per country on an
annual basis.
Fluxes for European countries and also
</td> </tr>
<tr>
<td>
Software managed under a version control platform (GitHub).
</td>
<td>
Software to be used by the climate scientific community: Community Inversion
Framework
(CIF)
</td>
<td>
Community inversion system to be used by all climate research groups to produce
independent, traceable and transparent GHG flux estimates from atmospheric
inversions.
</td>
<td>
Software that produces GHG fluxes, either CO 2 , CH 4 or N 2 O, depending on
the implementation and application of the CIF.
</td> </tr>
<tr>
<td>
Synthesis / Reports on GHG emissions
and sinks
</td>
<td>
Synthesis of different model estimates and observations
</td>
<td>
Report for policymakers and the inventory agencies, describing strengths and
weaknesses (with error assessment) of the data-driven GHG flux estimates.
</td>
<td>
Flux estimates with critical analysis of the associated
uncertainties;
analysis of the difference between UNFCCC statistics and
VERIFY products
</td> </tr> </table>
## Data Questionnaire
A questionnaire has been prepared in order to gather from each VERIFY partner
information on the data that they will provide. The Questionnaire is
accessible online at: _https://form.jotformeu.com/80422120148342_
This information will be further synthesized by work package and evaluated in
order to implement the appropriate Data Management Plan. Unfortunately, the
Questionnaire was only released during the 2018 summer period; most partners
have thus not completed it yet.
**Table 2: Data Management Plan Questionnaire for VERIFY**
<table>
<tr>
<th>
**Main class of data management question**
</th>
<th>
**Set of specific questions**
</th> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
1. Specific questions to describe the data provider and the associated task: [WP, Task, first name, last name]
2. Description of the data set with specific Keywords including which GHG [CO 2 , CH 4 , N 2 O], the type [Flux, stock, concentration], the nature [anthropogenic, natural], the origin [land based, lake-river based, ocean based, atmosphere based], the method [atmospheric inversion, land ecosystem model, ocean ecosystem model]
3. Data qualification [spatially gridded data, integrated data, site data, etc.]
4. Data Size Description: how many dimensions are used? to evaluate the size of the data
5. Information on the existence (or not) of similar data, the number of releases intended
</td> </tr>
<tr>
<td>
**Data access and associated metadata**
</td>
<td>
1. Initial data access [from a THREDDS server, from a specific ftp site, etc.]
2. Suggestion of additional metadata on the top of required ones [owner, creator, date of creation, start and end date, data coverage].
3. Are there ancillary data [errors estimates, etc.] attached to your data?
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
1. Description of how the data will be shared [widely open or restricted to VERIFY partners] during and after the project.
2. In case the dataset cannot be shared, the reasons for this should be mentioned (e.g. ethical, rules of personal data, intellectual property, commercial, privacy-related, security-related).
3. Access URL?
</td> </tr>
<tr>
<td>
**Data visualization**
</td>
<td>
1) Services to be associated with the data [online search and download, Maps
visualization, Time series visualization, etc.]?
</td> </tr>
<tr>
<td>
</td>
<td>
2) Visualization type that are expected: [static, interactive, live documents
(notebook)]
</td> </tr> </table>
# FAIR Data
The main objective is to manage all data produced by VERIFY following the
Findable, Accessible, Interoperable and Reusable (FAIR) policy. For that we
will make use of existing expertise in infrastructures such as i) the ICOS
Carbon Portal (https://www.icos-cp.eu/), ii) the Global Carbon Atlas
(http://www.globalcarbonatlas.org/en/content/welcome-carbon-atlas) of the
Global Carbon Project and iii) the Climate Data Store
(https://cds.climate.copernicus.eu/#!/home), which is currently under
development by the Copernicus Climate Change Service (C3S).
## Making data findable, including provisions for metadata
We will make sure that all data produced in the project will be discoverable
with specific metadata including standard “qualifiers” of the discipline (see
the example of metadata proposed in the Questionnaire, Table 2). A list of all
metadata associated with the VERIFY data sets will be provided in the next
version of the DMP.
Additionally, the data will be locatable by means of standard identification
mechanisms, such as Digital Object Identifiers (DOIs).
The data files will follow a specific naming convention that will be defined
by the Executive Board of VERIFY. This naming convention will take care of
version numbers, as the objective of VERIFY is to deliver updated products
each year.
## Making data openly accessible
Overall, the data sharing procedures will be the same across the datasets and
will be in accordance with the Grant Agreement.
Most mature data products of VERIFY will be made publicly available to
maximize the uptake by the climate scientific community, the inventory
community and all stakeholders. At minimum all data will be available to the
consortium during the project and most of them will be publicly available.
Restricted access will apply only for contractual reasons, especially to
protect the publication of a dataset by the data producer during the project
duration. The reasons for restricted data access will be provided for each
data set.
At this stage the structure of the database is not definitively selected; two
possibilities are considered: i) creating a project database hosted by the
coordinator, or ii) using the cloud through the EUDAT infrastructure to store
and manage all data sets. Depending on the solution, different tools/software
will be used to search for, access and visualize the different data sets.
In the case of an internal solution, it is foreseen to use specific software,
MongoDB, to search through the metadata catalogue of the internal database
(see _https://en.wikipedia.org/wiki/MongoDB_ ).
## Making data interoperable
We will seek to make all mature and publicly available data sets of VERIFY
interoperable. We will therefore follow the standards for data formats and
metadata, using the NetCDF-4 format and the CF convention (Climate and
Forecast convention: _http://cfconventions.org/_ ) as defaults.
We will make the database and data visualization facilities compatible with
the GEO portal (http://www.geoportal.org/)
## Increase data re-use
Following the Questionnaire, we will associate with each data set a possible
embargo period before the data become publicly available (to give the data
producer time to publish the data set). However, we aim to have all mature
VERIFY data publicly available well before the end of the project.
At the end of the project the database (if based on an internal data
repository) will be transferred to a more perennial infrastructure, the ICOS
Carbon Portal and/or the Climate Data Store of the Copernicus program. The
decision will be made during the course of VERIFY with the two partners in
charge of these infrastructures (ICOS and ECMWF). This will ensure that the
data remains reusable for a long period of time (decades at least).
For a proper re-use of the data, a description of the method behind each data
set will be provided as well as a description of all quality checks that were
done.
# General Data Management issues
## Allocation of resources
The cost of making the data FAIR will be shared across the data providers
(each partner of VERIFY) and the coordinator who is managing the database
(data repository, data saving and archiving, data visualization, etc):
* The data provider will make sure their data are compliant with international standards such as CF-compliant NetCDF format, providing the necessary data description and metadata for an efficient use of their data sets.
* The coordinator (CEA-LSCE) will take care of the main FAIR aspects, with the associated cost. It will define and manage the database and data visualization facilities.
The long-term preservation of the data will be discussed internally; all
products of VERIFY are expected to be transferred in one of the main European
data infrastructure relevant to GHG: the ICOS Carbon Portal and/or the
Copernicus data storage facility. The associated cost will be primarily taken
by the coordinator.
## Data security
After the collection of data files, it is important to ensure that the files
are saved in a secure directory. This implies that it is crucial to control
the use of the data files within the database and who has “write access” to
the data files.
We are currently studying the possibility of using the EUDAT system to store
the data, through the ICOS Carbon Portal infrastructure. The security issues
may thus be dealt with differently, depending on the solution chosen. In the
case of storing the data in a specific VERIFY database, the coordinator
(CEA-LSCE) will ensure that the responsibilities and authorities of the DMP
group members (corresponding to the Executive Board) are clearly declared and
that all steps during data collection and data management can be fully
secured. In the case of using the EUDAT system, we will rely on this European
infrastructure for data security.
## Ethical aspects
The VERIFY partners will comply with the ethical principles as set out in
Article 34 of the Grant Agreement, which states that all activities must be
carried out in compliance with:
* The ethical principles (including the highest standards of research integrity e.g. as set out in the European Code of Conduct for Research Integrity, and including, in particular, avoiding fabrication, falsification, plagiarism or other research misconduct) and Commission recommendation (EC) No 251/2005 of 11 March 2005 on the European Charter for Researchers and on a Code of Conduct for the Recruitment of Researchers (OJ
L 75, 22.03.2005, p. 67), the European Code of Conduct for Research Integrity
of ALLEA (All European Academies) and ESF (European Science Foundation) of
March 2011.
* Applicable international, EU and national law.
Furthermore, activities raising ethical issues will comply with the ‘ethics
requirements’ set out in the deliverable D9.1.
# VERIFY data sets
This section will summarize the different data sets of VERIFY following the
responses to the Questionnaire. Due to the late distribution of the
Questionnaire, it will be completed in the next update of the DMP.
## WP1 data
WP1 will produce mainly text documents that will be available during the
project duration to all
VERIFY members through the internal platform
( _https://projectsworkspace.eu/sites/VERIFY/SitePages/My%20VERIFY.aspx_ ;
with password protection) and to the external community through the public
website
( _http://verify.lsce.ipsl.fr/index.php_ ) .
## WP2-3-4 data
These three work packages will produce the main raw data for the overall
VERIFY objectives, i.e. data-driven (atmospheric, land, ocean based) estimates
of GHG fluxes. These will be mainly gridded fluxes of CO 2 , CH 4 and N 2
O following two major approaches:
* Bottom up approaches: fluxes derived from land/ocean ecosystem models constrained by in situ observations.
* Top-down approaches: fluxes derived from the valorization of atmospheric GHG concentration measurements through atmospheric transport inversion.
We provide below one example of these data sets, following the first responses
to the Questionnaire and using some additional information from the
description of work. The complete list of all data set descriptions will be
available in the next update of the DMP.
Additionally, the three WPs will produce a Community Inversion Framework (CIF)
that will be managed on GitHub for traceability and exchange with all climate
research groups across Europe.
**Table 3: Example of datasets for CO2 fluxes.**
<table>
<tr>
<th>
</th>
<th>
**CO 2 flux estimates from ORCHIDEE land surface model **
</th> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
European biosphere-atmosphere exchange fluxes of CO 2 from the Vegetation,
including gross fluxes (Photosynthesis and Respiration) from the ORCHIDEE
Model. The ORCHIDEE parameters will be calibrated against CO 2 flux
measurements from the FLUXNET network, satellite NDVI measurement (MODIS) and
globally from atmospheric CO 2 observations.
The data will be available at approx. 5-7 km x 5-7 km horizontal resolution
for the past two decades.
The data will be updated each year of the project.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The product will be available in netCDF-4 format compliant with CF
conventions. Metadata information will be included in the global attributes of
the Netcdf file.
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
The data set will be made publicly available as part of the primary raw data
of VERIFY.
</td> </tr>
<tr>
<td>
**Data visualization**
</td>
<td>
Data visualization (Maps and aggregated time series for pre-defined regions)
will be made available (as in the Carbon Atlas web site)
</td> </tr> </table>
## WP5-6 data
WP5 and WP6 will produce annual total GHG budgets (emissions and sinks) for
the anthropogenic and natural (LULUCF) sectors, for all European countries,
based on the raw data produced in WP2-3-4. These correspond to the
observation-based Fact-Sheets (described for instance in deliverable D5.6) of
GHG budgets using both bottom-up and top-down estimates from WP2-3-4. WP6 will
produce similar Fact-Sheets comparing observation-based and inventory budgets
of GHG per country (for European countries plus the USA, China and Indonesia).
The Fact-Sheets will be described in the next update of the DMP.
## WP7 data
Like WP1, WP7 will produce mainly text documents and summary/synthetic
presentations that will be available during the project duration to all VERIFY
members through the internal platform (
_https://projectsworkspace.eu/sites/VERIFY/SitePages/My%20VERIFY.aspx_ ; with
password protection) and to the external community through the public website
( _http://verify.lsce.ipsl.fr/index.php_ ) . The major annual documents
(updated each year) will be described in the next DMP update.
# INTRODUCTION
This report describes the Data Management Plan (DMP) for the CarE-Service
project. The report aims at establishing the policy framework for information
management and confidentiality within the CarE-Service consortium, the members
of the Consumers' and Stakeholders' Committees, as well as external
stakeholders. The policy framework for data management includes principles
ensuring the effective management and confidentiality of data, information and
records, together with the related authorities and responsibilities. The main
purpose of the data/information management policy is to protect both
electronic and paper versions of data/information from any unauthorized use
and access, with clear roles and responsibilities for those who manage the
data/information, while ensuring the greatest possible access to
data/information consistent with legislation and relevant EU/consortium
policies. Detailed information will be provided on the procedures that will be
implemented for data collection, storage, protection, retention and
destruction, and confirmation that they comply with national legislation (e.g.
the Bundesdatenschutzgesetz (BDSG) for Germany) and EU legislation (Regulation
(EU) 2016/679, the General Data Protection Regulation).
The DMP describes the data management life cycle for the data to be collected,
processed and/or generated by CarE-Service Horizon 2020 project. As part of
making research data findable, accessible, interoperable and re-usable (FAIR),
the DMP includes information on:
* the handling of research data during & after the end of the project
* what data will be collected, processed and/or generated
* which methodology & standards will be applied
* whether data will be shared/made open access and
* how data will be curated & preserved (including after the end of the project).
**BENEFICIARIES’ REPRESENTATIVES FOR THE DATA MANAGEMENT OF THE PROJECT.**
<table>
<tr>
<th>
**Full name**
</th>
<th>
**Short name**
</th>
<th>
**Contact person – DM Representatives**
</th> </tr>
<tr>
<td>
CONSIGLIO NAZIONALE DELLE RICERCH
</td>
<td>
CNR
</td>
<td>
Via A. Corti 12, Milano, 20133, Italy
Giacomo Copani: [email protected]_
Marcello Colledani: [email protected]_
Sarah Behnam: [email protected]_
</td> </tr>
<tr>
<td>
LINKOPINGS UNIVERSITET
</td>
<td>
LIU
</td>
<td>
Campus Valla, Linköping, 58183, Sweden
Erik Sundin: [email protected]_
Christian Kowalkowski: [email protected]_
Brenda Nansubuga: [email protected]_
</td> </tr>
<tr>
<td>
ENVIROBAT ESPANA SL
</td>
<td>
ENV
</td>
<td>
Avda. Lyon, 10, Azuqueca de Henares, 19200, Spain
Juan Manuel Pérez: [email protected]_ Celia Rosillo:
[email protected]_
</td> </tr>
<tr>
<td>
PRODIGENTIA - TECNOLOGIAS DE INFORMACAO SA
</td>
<td>
PROD
</td>
<td>
Rua Miguel Torga, Edifício Espaço Alfragide, 2C,
Alfragide, 2610-086, Portugal
Luis Domingues: [email protected]_
Pedro Farinha: [email protected]_
</td> </tr>
<tr>
<td>
AGENCIA ESTATAL CONSEJO
SUPERIOR DEINVESTIGACIONES CIENTIFICAS
</td>
<td>
CSIC
</td>
<td>
Calle Serrano 117, Madrid, 28006, Spain
Félix A. López: [email protected]_
Olga Rodriguez: [email protected]_
</td> </tr>
<tr>
<td>
CIRCULAR ECONOMY SOLUTIONS GMBH
</td>
<td>
CECO
</td>
<td>
Greschbachstr. 3, 76229, Karlsruhe, Germany Markus Wagner:
[email protected]_ Konstantinos Georgopoulos:
[email protected]_ Dominik Kuntz: [email protected]_
[email protected]_
</td> </tr>
<tr>
<td>
COBAT, CONSORZIO NAZIONALE RACCOLTAE RICICLO
</td>
<td>
COBAT
</td>
<td>
Via Vicenza, 29, Rome, 00185, Italy Luigi De Rocchi: [email protected]_
</td> </tr>
<tr>
<td>
FIAT CHRYSLER AUTOMOBILES ITALY SPA
</td>
<td>
FCA
</td>
<td>
Corso Giovanni Agnelli 200, Torino, 10135, Italy
Alessandro Levizzari: [email protected]_
Erica Anselmino: [email protected]_
Francesco Bonino: [email protected]_
</td> </tr>
<tr>
<td>
RADICI NOVACIPS SPA
</td>
<td>
RAD
</td>
<td>
Via Bedeschi 20, Chignolo D'Isola, 24040, Italy
Erico Spini: [email protected]_
Riccardo Galeazzi: [email protected]_
Carlo Grassini: [email protected]_
</td> </tr>
<tr>
<td>
IMA MATERIALFORSCHUNG UND ANWENDUNGSTECHNIK GMBH
</td>
<td>
IMA
</td>
<td>
Wilhelmine Reichard Ring 4, Dresden, 01109, Germany
Silvio Nebel: [email protected]_
Jens Ridzewski: [email protected]_
Jens Hornschuh: [email protected]_
</td> </tr>
<tr>
<td>
FRAUNHOFER GESELLSCHAFT ZUR
FOERDERUNG DER
ANGEWANDTEN FORSCHUNG E.V.
</td>
<td>
Fraunhofer
</td>
<td>
Reichenhainer Straße 88, Chemnitz, 09126, Germany
Katja Haferburg: [email protected]_
Thomas Hipke: [email protected]_
</td> </tr>
<tr>
<td>
AVICENNE DEVELOPPEMENT
</td>
<td>
AVIC
</td>
<td>
42 rue de la Grande Armée, 75017, Paris, France
Christophe Pillot: [email protected]_
</td> </tr>
<tr>
<td>
CIA AUTOMATION AND ROBOTICS SRL
</td>
<td>
CIA
</td>
<td>
Via San Carlo 16, Albiate, 20847, Italy
Enrico Novara: [email protected]_
Angelo Galimberti: [email protected]_
Matteo Giussani: [email protected]_
</td> </tr>
<tr>
<td>
E-VAI SRL
</td>
<td>
EVAI
</td>
<td>
Piazza Cadorna, 14, Milano, 20100, Italy
Alessandra Melchioni: [email protected]_ Mauro Buzzi Reschini:
[email protected]_
</td> </tr> </table>
<table>
<tr>
<th>
JRC -JOINT RESEARCH
CENTREEUROPEAN COMMISSION
</th>
<th>
JRC
</th>
<th>
Rue de la Loi 200, Brussels, 1049, Belgium
Fabrice Mathieux: [email protected]_
Fulvio Ardente: [email protected]_
Paolo Tecchio: [email protected]_
</th> </tr> </table>
# DATA POLICY - GENERAL AND PERSONAL DATA PROTECTION
During the project data will be collected from or generated by several
sources. For research purposes, CarE-Service project may contain and process
personal data from various sources (e.g. interviews, questionnaires, online
ICT platforms and websites, etc.). Additionally, personal data of the project
participants may be collected during meetings, workshops, on social media
platforms etc. in the form of photos, videos, names, etc., for research,
promotion or other purposes of the project. “Personal data” means information
relating to an identified or identifiable natural person.
An identifiable natural person is one who can be identified, directly or
indirectly, in particular by reference to an identifier such as a name, an
identification number, location data, an online identifier or to one or more
factors specific to the physical, physiological, genetic, mental, economic,
cultural or social identity of that natural person, Article 4 EU General Data
Protection Regulation (GDPR).
Examples: name, address, identification number, pseudonym, occupation, e-mail,
location data, Internet Protocol (IP) address, cookie ID, phone number, data
provided by smart meters, etc.
Individuals are not considered “identifiable” if identifying them requires
excessive effort.
Completely anonymized data does not fall under the data privacy rules (from
the moment it has been completely anonymized).
“Processing of personal data” means any operation (or set of operations)
performed on personal data, either manually or by automatic means. This
includes:
* collection (digital audio recording, digital video caption, etc.)
* recording
* organization, structuring & storage (cloud, LAN or WAN servers)
* adaptation or alteration (merging sets, amplification, etc.)
* retrieval & consultation
* use
* disclosure by transmission, dissemination or otherwise making available (share, exchange, transfer)
* alignment or combination
* restriction, erasure or destruction.
Examples: access to/consultation of a database containing personal data;
managing of the database; posting/putting a photo of a person on a website;
storing IP addresses or MAC addresses; video recording (CCTV); creating a
mailing list or a list of participants.
Processing normally covers any action that uses data for research purposes
(even if interviewees, human volunteers, patients, etc. are not actively
included in the research).
Personal data may come from any type of research activity (ICT research),
personal records (financial, criminal, education, etc.), lifestyle and health
information, physical characteristics, gender and ethnic background, location
tracking and domicile information, etc.
All research processes and all the project activities (meetings, publications,
etc.) for the CarEService project must comply with ethical principles and all
the applicable international, EU and national law (in particular the GDPR,
national data protection laws and other relevant legislation).
## PRE-SCREENING
In the first stage, there is a pre-screening process for personal data.
Whoever collects the data must be able to answer the following questions and
act according to the corresponding instructions.
1. Does your research involve processing of personal data? If yes, in that case the partner must provide detailed information such as:
* Details of the technical and organizational measures to safeguard the rights of the research participants.
* Details of the informed consent procedures.
* Details of the security measures to prevent unauthorized access to personal data.
* Details of “data minimization principle” (processing only relevant data and limiting the processing to the purpose of the project).
* Details of the anonymization /pseudonymization techniques.
* Justification in case research data will not be anonymized/ pseudonymized (if relevant).
* Details of the data transfers (type of data transferred and country to which it is transferred – for both EU and non-EU countries).
Additionally, documents should be kept on file, such as Informed Consent Forms
and Information Sheets used (if relevant) (D10.1).
In case the research involves processing of personal data, the partner should
be able to reply to the following question(s) and act according to the
corresponding instructions.
2. Does it involve the processing of special categories of personal data (e.g. genetic, health, sexual lifestyle, ethnicity, political opinion, religious or philosophical conviction)?
In that case the partner must provide detailed information such as:
* Justification for the processing of special categories of personal data.
* Justification in case research objects will not be anonymized/ pseudonymized (if applicable)
3. Does it involve processing of genetic, biometric or health data?
A declaration confirming compliance with the laws of the country where the
data was collected should be kept on file.
4. Does it involve profiling, systematic monitoring of individuals or processing of large scale of special categories of data, intrusive methods of data processing (such as tracking, surveillance, audio and video recording, geolocation tracking etc.) or any other data processing operation that may result in high risk to the rights and freedoms of the research participants?
In that case the partner must provide detailed information such as:
* Details of the methods used for tracking, surveillance or observation of participants.
* Details of the methods used for profiling.
* Risk assessment for the data processing activities.
* Details of safeguarding the rights of the research participants.
* Details on the procedures for informing the research participants about profiling, and its possible consequences and the protection measures.
Data protection impact assessment (art. 35 GDPR) must be provided.
5. Does your research involve further processing of previously collected personal data (including use of preexisting data sets or sources, merging existing data sets)?
In that case the partner must provide detailed information such as:
* Details of the database used or of the source of the data.
* Details of the data processing operations.
* Details of safeguarding the rights of the research participants.
* Details of “data minimization principle” (processing only relevant data and limiting the processing to the purpose of the project).
* Justification in case research objects will not be anonymized/ pseudonymized (if applicable)
Additionally, documents should be kept on file, such as 1) Declaration
confirming lawful basis for the data processing, 2) Permission by the
owner/manager of the data sets (e.g. social media databases) (if applicable),
3) Informed Consent Forms + Information Sheets + other consent documents (opt
in processes, etc.) (if applicable).
6. Does your research involve publicly available data?
In that case the partner must confirm that the data used in the project is
publicly available and can be freely used for the project.
Additionally, documents should be kept on file, such as permission by the
owner/manager of the data sets (e.g. social media databases) (if applicable).
7. Is it planned to export personal data from the EU to non-EU countries? Specify the type of personal data and countries involved.
In that case the partner must provide detailed information such as the details
of the types of personal data to be exported. Also, the details of
safeguarding the rights of the research participants must be described.
Additionally, documents should be kept on file, such as a declaration of
confirming compliance with Chapter V of the GDPR.
8. Is it planned to import personal data from non-EU countries into the EU? Specify the type of personal data and countries involved.
In that case the partner must provide detailed information such as the details
of the types of personal data to be imported.
Additionally, documents should be kept on file, such as a declaration of
confirming compliance with the laws of the country in which the data was
collected.
## RISK ANALYSIS FOR ALL DATA PROCESSING ACTIVITIES
All processing activities of personal data must be listed in an overview
(independent of the criticality of the personal data). The list must be
created, maintained and updated by the data controller.
The overview will be the basis for the risk analysis.
For all processing activities of personal data the following information is
required:
* Processing activity
* Description of the processing activity
* Data Types and contained data (e.g. personal data, financial data, working data, …)
* Legal basis for the processing activity
* Responsible of the processing activity (Person, Position, Company)
* Purpose of the processing activity
* Affected parties of the processing activity
* Criticality of the processing activity (in terms of confidentiality, availability and integrity)
* Deletion period of the processing activity
The risk analysis must then be applied to each processing activity.
The following details need to be described and evaluated (a sketch of a
structured record combining these fields is shown after the list below):
* Risk description (e.g. unauthorized access to personal data)
* Potential harm for affected party
* Description of technical and organizational measures
* Classification of risk in dependency on probability and severity of risk (e.g. low, medium, high, …)
* Risk handling (e.g. accept risk, moderate or eliminate risk probability by implementing additional measures, …)
* Re-classification of risk in dependency on probability and severity of risk (e.g. low, medium, high, …) in case additional measures will be implemented
## ANONYMIZATION AND PSEUDONYMIZATION
Under these rules, all the collected personal data must be processed in
accordance with certain principles and conditions that aim to limit the
negative impact on the persons concerned and ensure fairness, transparency and
accountability of the data processing, data quality and confidentiality. As
described in Deliverable 10.1, CarE-Service does not intend to collect, store
and/or process sensitive data (health, genetic or biometric data). CarE-
Service may collect, store and monitor both “anonymous” and “non-anonymous”
non-sensitive personal data.
This implies the following main obligations for all the participants:
* Data processing should be subject to appropriate safeguards.
* Data should, wherever possible, be processed in anonymized or pseudonymized form (D10.1, Chapter 2).
* Data processing is subject to free and fully informed consent of the persons concerned (unless already covered by another legal basis, e.g. legitimate or public interest).
* Data processing must NOT be performed in secret and research participants must be made aware that they take part in a research project and be informed of their rights and the potential risks that the data processing may bring.
* Data may be processed ONLY if it is really adequate, relevant and limited to what is necessary for the research (‘data minimization principle’).
Collection of personal data (e.g. on religion, sexual orientation, race,
ethnicity, etc.) is not essential and relevant with the scope of the research
and the project, therefore the collection of sensitive personal data is not
permitted.
It is recommended that all participants and partners use anonymized or
pseudonymized data for project purposes.
“Anonymized” means that the data has been rendered anonymous in such a way
that the data subject can no longer be identified (and therefore is no longer
personal data and thus outside the scope of data protection laws).
_Figure 1: Anonymization (Source: www.wso2.com, 2018)_
Anonymization also helps in making research data and project results FAIR,
without providing personal data (Figure 2) (FAIR: see Chapter 6).
_Figure 2: Anonymization of personal data (Source: OpenAIRE, 2017)_
“Pseudonymized” means separating the data from its direct identifiers so that
linkage to a person is only possible with additional information that is held
separately. This additional information must be kept separately and securely
from the processed data to ensure non-attribution.
_Figure 3: Pseudonymization, (Source: www.wso2.com, 2018)_
It is recommended to involve the data protection officer (DPO) in all stages
of the project whenever data privacy issues arise.
Even if all privacy-related issues are addressed, research data may still
raise other ethics issues, such as the potential misuse of the research
methodology/ findings or ethics harms to specific groups.
# DATA SUMMARY
The main purpose of the data collection/generation in the CarE-Service
project is the description of new circular economy business models in
innovative hybrid and electric mobility through advanced reuse and
remanufacturing technologies and services.
The CarE-Service project will produce several datasets during its lifetime.
All the data collected will be relevant to the purposes of the project, such
as the establishment of circular economy business models, the development of
the Smart Mobile Modules, the creation of customer-driven products, the
development and validation of technical solutions for reused, remanufactured
and recycled components, and the evaluation of these business models through
demonstration and life cycle assessment (LCA).
All the collected or generated data will be analyzed and evaluated from a
range of methodological perspectives for project development and engineering
and scientific purposes.
A range of data will be created during the project. These will be available in
a variety of easily accessible formats, including documents (Word) (DOCX),
spreadsheets (Excel) (XLSX, CSV), presentation files (PowerPoint) (PPT),
page-layout formats (PDF, XPS), images, audio and video files (JPEG, PNG, GIF,
TIFF, WAV, MPEG, AIFF, OGG, AVI, MP4), technical CAD drawings (DWG), Origin
(OPJ), compressed formats (TAR.GZ, MTZ), program databases (PDB, DBS, MDF,
NDF), etc. (see Table 5-1).
As no comparable data are available for secondary analysis at the moment, it
is planned to make our dataset publicly available in a research data
repository. Apart from the research team, the dataset will be useful for other
research groups working on eco-innovative circular economy business models on
large scale demonstration projects.
The following table contains all the datasets that will be generated during
the project. The expected size of the datasets produced will be between 5 MB
and 1 GB.
For every dataset generated in a task, the leading partner of that task will
be the Master of Data. The Master of Data will be responsible for collecting
the data from the other partners, for the filing and sharing actions among the
consortium, for the creation of the linked metadata files, and for the
activities required to publish the data, e.g. on the Zenodo platform.
_Table 5-1: Potential Datasets_
<table>
<tr>
<th>
**Potential datasets – Description**
</th>
<th>
**Format**
</th>
<th>
**Dissemination level**
</th>
<th>
**Master of Data**
</th> </tr>
<tr>
<td>
**WP1 - Requirements for new business models, services, demonstrators and
KPIs**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
1
</td>
<td>
Task 1.1.a
</td>
<td>
Survey for deriving detailed information about mobility usage and needs,
limitations of current E&HEVs offerings, acceptance criteria, etc., from
various groups of customers in Europe.
</td>
<td>
DOCX, XLSX, CSV, PDF
</td>
<td>
Public
</td>
<td>
LIU
</td> </tr>
<tr>
<td>
2
</td>
<td>
Task 1.1.b
</td>
<td>
Case study interviews of the main players of the future re-use chains, both in
the automotive and non-automotive sectors, for the creation of KPIs on the new
industrial circular business models
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Public
</td> </tr>
<tr>
<td>
3
</td>
<td>
Task 1.2.a
</td>
<td>
Technical requirements and KPIs for new re-use processes and technologies for
batteries in re-use value chains
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Confidential
</td>
<td>
CNR
</td> </tr>
<tr>
<td>
4
</td>
<td>
Task 1.2.b
</td>
<td>
Technical requirements and KPIs for new re-use processes and technologies for
metals re-use value chains
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
5
</td>
<td>
Task 1.2.c
</td>
<td>
Technical requirements and KPIs for new re-use processes and technologies for
polymers re-use value chains
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
6
</td>
<td>
Task 1.2.d
</td>
<td>
Technical requirements and KPIs for new re-use processes and technologies for
batteries in re-use value chains
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Public
</td> </tr>
<tr>
<td>
7
</td>
<td>
Task 1.2.e
</td>
<td>
Technical requirements and KPIs for new re-use processes and technologies for
metals re-use value chains
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Public
</td> </tr>
<tr>
<td>
8
</td>
<td>
Task 1.2.f
</td>
<td>
Technical requirements and KPIs for new re-use processes and technologies for
polymers re-use value chains
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Public
</td> </tr>
<tr>
<td>
9
</td>
<td>
Task 1.3
</td>
<td>
Questionnaire and interview regarding the technical requirements of the ICT
Platform and information associated to reusable E&HEVs products.
</td>
<td>
DOCX, XLSX,
CSV, PDF, PPT
</td>
<td>
Confidential
</td>
<td>
PROD
</td> </tr>
<tr>
<td>
10
</td>
<td>
Task 1.4
</td>
<td>
Requirements for generalization of the approach to EU industry
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Public
</td>
<td>
FCA
</td> </tr>
<tr>
<td>
**WP2 - New circular economy business models and service engineering**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
11
</td>
<td>
Task 2.1
</td>
<td>
Customer’s requirements and mobility needs considered on a geographic
perspective
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Confidential
</td>
<td>
CNR
</td> </tr>
<tr>
<td>
12
</td>
<td>
Task 2.2.a
</td>
<td>
Workshop notes and results
</td>
<td>
DOCX, XLSX, PDF, PPTX
</td>
<td>
Confidential
</td>
<td>
LIU
</td> </tr>
<tr>
<td>
13
</td>
<td>
Task 2.2.b
</td>
<td>
Interviews of the SG for external complementary knowledge
</td>
<td>
DOCX, XLSX,
CSV, PDF, PPTX
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
14
</td>
<td>
Task 2.3.a
</td>
<td>
Risk management – interviews to partners and members of the project SG
</td>
<td>
DOCX, XLSX,
CSV, PDF, PPTX
</td>
<td>
Public
</td>
<td>
AVIC
</td> </tr>
<tr>
<td>
15
</td>
<td>
Task 2.3.b
</td>
<td>
Desk analysis to collect quantitative information useful to
describe/prioritize risks and side effects
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Public
</td> </tr>
<tr>
<td>
16
</td>
<td>
Task 2.4.a
</td>
<td>
Socio-economic simulation of new services and business models
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Confidential
</td>
<td>
CNR
</td> </tr>
<tr>
<th>
17
</th>
<th>
Task 2.4.b
</th>
<th>
Socio-economic simulation of new services and business models
</th>
<th>
DOCX, XLSX, PDF
</th>
<th>
Public
</th>
<th>
</th> </tr>
<tr>
<td>
**WP3 - Customer-driven products re- design for circular economy**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
18
</td>
<td>
Task 3.1
</td>
<td>
Big data from the use phase of products can be collected from different
stakeholders and analyzed to understand main uses and attitudes, thus
suggesting design innovations
</td>
<td>
DOCX, XLSX,
PDF, DWG, JPG
</td>
<td>
Public
</td>
<td>
LIU
</td> </tr>
<tr>
<td>
19
</td>
<td>
Task 3.2.a
</td>
<td>
Redesign of batteries of E&HEVs in order to allow their re-use
</td>
<td>
DOCX, XLSX,
PDF, DWG, JPG
</td>
<td>
Confidential
</td>
<td>
FCA
</td> </tr>
<tr>
<td>
20
</td>
<td>
Task 3.2.b
</td>
<td>
Redesign of metals of E&HEVs in order to allow their re-use
</td>
<td>
DOCX, XLSX,
PDF, DWG, JPG
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
21
</td>
<td>
Task 3.2.c
</td>
<td>
Redesign of techno-polymers of E&HEVs in order to allow their re-use
</td>
<td>
DOCX, XLSX,
PDF, DWG, JPG
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
22
</td>
<td>
Task 3.3.a
</td>
<td>
Creation of prototypes for batteries
</td>
<td>
DOCX, XLSX,
PDF, DWG, JPG
</td>
<td>
Confidential
</td>
<td>
Fraunhofer
</td> </tr>
<tr>
<td>
23
</td>
<td>
Task 3.3.b
</td>
<td>
Creation of prototypes for polymeric materials
</td>
<td>
DOCX, XLSX,
PDF, DWG, JPG
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
24
</td>
<td>
Task 3.3.c
</td>
<td>
Creation of virtual mock-up models, in case of parts (i.e. structural metal
components) would require high investment for prototyping.
</td>
<td>
DOCX, XLSX,
PDF, DWG, JPG
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
25
</td>
<td>
Task 3.4
</td>
<td>
Testing and validation of re-designed products concepts
</td>
<td>
DOCX, XLSX,
PDF, DWG, JPG
</td>
<td>
Confidential
</td>
<td>
IMA
</td> </tr>
<tr>
<td>
**WP4 - Engineering and development of the Smart Mobile Modules**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
26
</td>
<td>
Task 4.1
</td>
<td>
Detailed engineering of processes and technologies performed by Smart Mobile
Modules.
</td>
<td>
DOCX, XLSX, PDF, JPG
</td>
<td>
Confidential
</td>
<td>
CNR
</td> </tr>
<tr>
<td>
27
</td>
<td>
Task 4.2
</td>
<td>
Development, testing and validation of the smart disassembly module.
</td>
<td>
DOCX, XLSX,
PDF, STP,
DWG, JPG
</td>
<td>
Confidential
</td>
<td>
CIA
</td> </tr>
<tr>
<td>
28
</td>
<td>
Task 4.3
</td>
<td>
Performing tests (electrical tests, load tests on components, vibration tests,
visual inspection tests, etc.)
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Confidential
</td>
<td>
IMA
</td> </tr>
<tr>
<td>
29
</td>
<td>
Task 4.4
</td>
<td>
Operational business model of the Smart Mobile Modules (SMMs)
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Confidential
</td>
<td>
CNR
</td> </tr>
<tr>
<td>
**WP5 - Development and validation of technical solutions for components re-
use, remanufacturing and recycling**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
30
</td>
<td>
Task 5.1.a
</td>
<td>
Battery cells re-use: Standard Operational Sheet (SOS)
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Confidential
</td>
<td>
CNR
</td> </tr>
<tr>
<th>
31
</th>
<th>
Task 5.1.b
</th>
<th>
Batteries recycling: Suitable processes based on the combination of mechanical
(e.g. shredding and mechanical separation) and chemical (e.g.
hydrometallurgical) technologies will be engineered.
</th>
<th>
DOCX, XLSX, PDF
</th>
<th>
Confidential
</th> </tr>
<tr>
<td>
32
</td>
<td>
Task 5.2
</td>
<td>
Detailed engineering of processes and technologies for metal parts re-use and
remanufacturing solutions
</td>
<td>
DOCX, XLSX, PDF,
</td>
<td>
Confidential
</td>
<td>
Fraunhofer
</td> </tr>
<tr>
<td>
33
</td>
<td>
Task 5.3
</td>
<td>
Detailed engineering of processes and technologies for techno-polymers re-use
and recycling solutions
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Confidential
</td>
<td>
CSIC
</td> </tr>
<tr>
<td>
34
</td>
<td>
Task 5.4.a
</td>
<td>
Testing and validation will be performed considering different conditions of
batteries (new, aged, defected, damaged and used); and
</td>
<td>
DOCX, XLSX,
PDF, DWG, JPG
</td>
<td>
Confidential
</td>
<td>
ENV
</td> </tr>
<tr>
<td>
35
</td>
<td>
Task 5.4.b
</td>
<td>
Testing and validation will be performed considering different battery modules
design and type;
</td>
<td>
DOCX, XLSX,
PDF, DWG, JPG
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
36
</td>
<td>
Task 5.4.c
</td>
<td>
Testing and validation will be performed considering different levels of
residual charge and estimated remaining life cycles;
</td>
<td>
DOCX, XLSX,
PDF, DWG, JPG
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
37
</td>
<td>
Task 5.4.d
</td>
<td>
Testing and validation will be performed considering different chemistry.
</td>
<td>
DOCX, XLSX,
PDF, DWG, JPG
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
38
</td>
<td>
Task 5.5
</td>
<td>
Mechanical tests, such as tensile, compression, bending, or torsion tests will
be performed to determine remanufactured parts properties etc. and results.
</td>
<td>
DOCX, XLSX,
PDF, DWG, JPG
</td>
<td>
Confidential
</td>
<td>
Fraunhofer
</td> </tr>
<tr>
<td>
39
</td>
<td>
Task 5.6
</td>
<td>
Mechanical stress testing, aesthetical features evaluation, corrosion/chemical
testing and results.
</td>
<td>
DOCX, XLSX,
PDF, DWG,
STEP, DAF,
DOF, JPG,
TXT, CSV
</td>
<td>
Confidential
</td>
<td>
RAD
</td> </tr>
<tr>
<td>
**WP6 - Development of CarE-Service ICT Platform and logistics**
</td>
<td>
</td> </tr>
<tr>
<td>
40
</td>
<td>
Task 6.1.a
</td>
<td>
Design of CarE-Service logistics
</td>
<td>
DOCX, XLSX,
PDF, DWG, GIS
</td>
<td>
Confidential
</td>
<td>
COBAT
</td> </tr>
<tr>
<td>
41
</td>
<td>
Task 6.1.b
</td>
<td>
Simulation algorithms for design optimal reverse logistics infrastructure
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
42
</td>
<td>
Task 6.2
</td>
<td>
Design of the CarE-Service ICT Platform
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Confidential
</td>
<td>
PROD
</td> </tr>
<tr>
<td>
43
</td>
<td>
Task 6.3
</td>
<td>
Implementation and testing of the CarE-Service Platform - list of issues and
new needs to be added to the development backlog/ report
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
44
</td>
<td>
Task 6.4
</td>
<td>
Operational business model of the CarE-Service ICT Platform
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Confidential
</td>
<td>
CECO
</td> </tr>
<tr>
<th>
**WP7 - Demonstration & LCA assessment**
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
45
</td>
<td>
Task 7.1.a
</td>
<td>
Users’ comments and feedback
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Public
</td>
<td>
EVAI
</td> </tr>
<tr>
<td>
46
</td>
<td>
Task 7.1.b
</td>
<td>
Realistic data/ Real time data from 50+ customers/users from the limited pilot
experience
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Public
</td> </tr>
<tr>
<td>
47
</td>
<td>
Task 7.2.a
</td>
<td>
Demonstration of Smart Mobile Modules and services
</td>
<td>
DOCX, XLSX, PDF, DWG
</td>
<td>
Public
</td>
<td>
CIA
</td> </tr>
<tr>
<td>
48
</td>
<td>
Task 7.2.b
</td>
<td>
Realistic data for the virtual demonstration of the ICT platform
</td>
<td>
DOCX, XLSX, PDF, DWG
</td>
<td>
Public
</td> </tr>
<tr>
<td>
49
</td>
<td>
Task 7.3.a
</td>
<td>
Data for the pilot demonstration from “IASOL”
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Public
</td>
<td>
ENV
</td> </tr>
<tr>
<td>
50
</td>
<td>
Task 7.3.b
</td>
<td>
Data for the pilot demonstration from “eBIKE75”
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Public
</td> </tr>
<tr>
<td>
51
</td>
<td>
Task 7.3.c
</td>
<td>
Data for the pilot demonstration from “FERRO”
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Public
</td> </tr>
<tr>
<td>
52
</td>
<td>
Task 7.4.a
</td>
<td>
Demonstration of metal parts re-use and remanufacturing solutions
</td>
<td>
DOCX, XLSX,
PDF, DWG, JPG
</td>
<td>
Public
</td>
<td>
Fraunhofer
</td> </tr>
<tr>
<td>
53
</td>
<td>
Task 7.4.b
</td>
<td>
Evaluation of the business model/ economic assessment of new services
</td>
<td>
DOCX, XLSX,
PDF, DWG, JPG
</td>
<td>
Public
</td> </tr>
<tr>
<td>
54
</td>
<td>
Task 7.5
</td>
<td>
Demonstration of techno-polymers re-use and recycling solutions
</td>
<td>
DOCX, XLSX, PDF, JPEG
</td>
<td>
Public
</td>
<td>
RAD
</td> </tr>
<tr>
<td>
55
</td>
<td>
Task 7.6
</td>
<td>
Ex-ante, intermediate and ex-post quantitative analysis of the demonstration
scenarios based on the KPIs identified in T1.3.
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Public
</td>
<td>
CNR
</td> </tr>
<tr>
<td>
56
</td>
<td>
Task 7.7.a
</td>
<td>
Life Cycle Assessment (Confidential version)
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Confidential
</td>
<td>
FCA
</td> </tr>
<tr>
<td>
57
</td>
<td>
Task 7.7.b
</td>
<td>
Life Cycle Assessment (Public version)
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Public
</td>
<td>
FCA
</td> </tr>
<tr>
<td>
**WP8 - Dissemination & Exploitation**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
58
</td>
<td>
Task 8.1
</td>
<td>
Dissemination and communication plan (First version: V1.0, Second version:
V2.0, etc.)
</td>
<td>
DOCX, PDF
</td>
<td>
Confidential
</td>
<td>
CSIC
</td> </tr>
<tr>
<td>
59
</td>
<td>
Task 8.2
</td>
<td>
Exploitation planning and implementation (First version: V1.0, Second version:
V2.0, etc.)
</td>
<td>
DOCX, PDF
</td>
<td>
Confidential
</td>
<td>
FCA
</td> </tr>
<tr>
<td>
60
</td>
<td>
Task 8.3
</td>
<td>
Addressing drawbacks and market acceptance
</td>
<td>
DOCX, PDF
</td>
<td>
Public
</td>
<td>
AVIC
</td> </tr>
<tr>
<td>
61
</td>
<td>
Task 8.4
</td>
<td>
Standardization and legislation plan and actions (First version: V1.0, Second
version: V2.0)
</td>
<td>
DOCX, PDF
</td>
<td>
Confidential
</td>
<td>
COBAT
</td> </tr>
<tr>
<td>
62
</td>
<td>
Position paper for standardization and legislation (First version: V1.0,
Second version: V2.0)
</td>
<td>
DOCX, PDF
</td>
<td>
Public
</td> </tr>
<tr>
<td>
63
</td>
<td>
Task 8.5
</td>
<td>
IPR management (First version: V1.0, Second version: V2.0)
</td>
<td>
DOCX, PDF
</td>
<td>
Confidential
</td>
<td>
CSIC
</td> </tr>
<tr>
<td>
64
</td>
<td>
Task 8.6
</td>
<td>
Plan and actions to exploit structural funds (First version: V1.0, Second
version: V2.0)
</td>
<td>
DOCX, PDF
</td>
<td>
Confidential
</td>
<td>
CNR
</td> </tr>
<tr>
<td>
65
</td>
<td>
Task 8.7.a
</td>
<td>
On-line material will be prepared and uploaded on the CarE-Service website, to
explain the developed solutions, services and business models.
</td>
<td>
PDF, PPTX
</td>
<td>
Public
</td>
<td>
CSIC
</td> </tr>
<tr>
<th>
66
</th>
<th>
Task 8.7.b
</th>
<th>
Material for training webinars and workshops
</th>
<th>
PDF, PPTX
</th>
<th>
Public
</th>
<th>
</th> </tr>
<tr>
<td>
**WP9 - Project management**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
67
</td>
<td>
Task 9.1
</td>
<td>
Project Management Handbook
</td>
<td>
DOCX, PDF
</td>
<td>
Confidential
</td>
<td>
CNR
</td> </tr>
<tr>
<td>
68
</td>
<td>
Task 9.2
</td>
<td>
Management Report (First version: V1.0, Second version: V2.0, etc.)
</td>
<td>
DOCX, PDF
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
69
</td>
<td>
Task 9.3
</td>
<td>
Risk Management Plan (First version: V1.0, V1.1, V1.2 Second version: V2.0,
V2.1, V2.2 etc.)
</td>
<td>
DOCX, XLSX, PDF
</td>
<td>
Confidential
</td>
<td>
AVIC
</td> </tr>
<tr>
<td>
70
</td>
<td>
Task 9.4
</td>
<td>
Data Policy and Data Management Plan (First version: V1.0, Second version:
V2.0, etc.)
</td>
<td>
DOCX, PDF
</td>
<td>
Public
</td>
<td>
CECO
</td> </tr>
<tr>
<td>
**WP10 - Ethics requirements**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
71
</td>
<td>
Task 10.1
</td>
<td>
POPD - H - Requirement No. 2
</td>
<td>
DOCX, PDF
</td>
<td>
Public
</td>
<td>
CNR
</td> </tr> </table>
# FAIR DATA
## MAKING DATA FINDABLE, INCLUDING PROVISIONS FOR METADATA
A DOI will be assigned to each dataset for effective and persistent citation
when it is uploaded to the Zenodo repository. This DOI can be used in any
relevant publications to direct readers to the underlying dataset.
Each dataset generated during the project will be recorded in an Excel
spreadsheet with a standard format and allocated a dataset identifier. The
spreadsheet will be hosted at the CarE project’s SharePoint. This dataset
information will be included in the metadata file at the beginning of the
documentation and updated with each version.
The CarE-Service naming convention for project datasets will comprise the
following:
1. The unique identifying number of the dataset as described in Table 5-1.
2. The title of the dataset.
3. A version number allocated to each new version of a dataset, starting for example at v1.0.
4. The acronym “CarE” indicating a dataset generated by the CarE-Service project.
5. A unique identification number linking the dataset with its work package and deliverable number, followed by the task number.
e.g.: “70.Data_Management_Plan_v1.0_CarE_WP9_D9.4_T9.4.pdf”
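A small helper reproducing this convention might look as follows; this is a sketch only, and the convention described above, not the code, is authoritative.

```python
def dataset_identifier(number: int, title: str, version: str,
                       wp: str, deliverable: str, task: str,
                       ext: str = "pdf") -> str:
    """Compose a dataset identifier following the CarE naming convention."""
    safe_title = title.replace(" ", "_")
    return f"{number}.{safe_title}_{version}_CarE_{wp}_{deliverable}_{task}.{ext}"


# Reproduces the example given above:
print(dataset_identifier(70, "Data Management Plan", "v1.0", "WP9", "D9.4", "T9.4"))
# -> 70.Data_Management_Plan_v1.0_CarE_WP9_D9.4_T9.4.pdf
```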
_Table 6-1: Dataset fields_
<table>
<tr>
<th>
Dataset Identifier
</th>
<th>
The ID allocated using the naming convention outlined in Section 6.1
</th> </tr>
<tr>
<td>
Title of Dataset
</td>
<td>
The title of the dataset which should be easily searchable and findable
</td> </tr>
<tr>
<td>
Responsible Partner
</td>
<td>
Lead partners responsible for the creation of the dataset
</td> </tr>
<tr>
<td>
Work Package
</td>
<td>
The associated work package this dataset originates from
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
A brief description of the dataset
</td> </tr>
<tr>
<td>
Dataset Benefit
</td>
<td>
What are the benefits of the dataset
</td> </tr>
<tr>
<td>
Dataset Dissemination
</td>
<td>
Where will the dataset be disseminated
</td> </tr>
<tr>
<td>
Type Format
</td>
<td>
This could be DOC, XLSX, PDF, JPEG, TIFF, PPT etc. (Table 5-1)
</td> </tr>
<tr>
<td>
Expected Size
</td>
<td>
The approximate size of the dataset
</td> </tr>
<tr>
<td>
Source
</td>
<td>
How/why was the dataset generated
</td> </tr>
<tr>
<td>
Repository
</td>
<td>
Expected repository to be submitted (Zenodo)
</td> </tr>
<tr>
<td>
DOI (if known)
</td>
<td>
The DOI can be entered once the dataset has been deposited in the repository
</td> </tr>
<tr>
<td>
Date of Repository Submission
</td>
<td>
The date of submission to the repository can be added once it has been
submitted
</td> </tr>
<tr>
<td>
Keywords
</td>
<td>
The keywords associated with the dataset
</td> </tr>
<tr>
<td>
Version Number
</td>
<td>
To keep track of changes to the datasets
</td> </tr>
<tr>
<td>
Link to metadata file
</td>
<td>
The SharePoint link where the file is saved
</td> </tr> </table>
_Table 6-2: Example of dataset fields_
<table>
<tr>
<th>
Dataset Identifier
</th>
<th>
70.Data_Policy_And_Management_Plan_v1.0_CarE_WP9_D9.4_T9.4.pdf
</th> </tr>
<tr>
<td>
Title of Dataset
</td>
<td>
Data Policy and Management Plan
</td> </tr>
<tr>
<td>
Responsible Partner
</td>
<td>
CECO
</td> </tr>
<tr>
<td>
Work Package
</td>
<td>
WP9
</td> </tr>
<tr>
<td>
Deliverable
</td>
<td>
D9.4
</td> </tr>
<tr>
<td>
Dataset Description
</td>
<td>
The Data Policy and Management Plan Policy document
</td> </tr>
<tr>
<td>
Dataset Benefit
</td>
<td>
The report aims at establishing the policy framework for information
management and confidentiality within the CarE-Service consortium, the members
of the Consumers’ and Stakeholders Committees, as well as external
stakeholders
</td> </tr>
<tr>
<td>
Dataset Dissemination
</td>
<td>
</td> </tr>
<tr>
<td>
Type Format
</td>
<td>
PDF
</td> </tr>
<tr>
<td>
Expected Size
</td>
<td>
1MB
</td> </tr>
<tr>
<td>
Source
</td>
<td>
</td> </tr>
<tr>
<td>
Repository
</td>
<td>
Zenodo, www.zenodo.org
</td> </tr>
<tr>
<td>
DOI
</td>
<td>
To be inserted once the dataset is uploaded to the repository
</td> </tr>
<tr>
<td>
Date of Repository Submission
</td>
<td>
To be inserted once the dataset is uploaded to the repository
</td> </tr>
<tr>
<td>
Keywords
</td>
<td>
Data Management Plan, Personal Data, GDPR, Data Policy
</td> </tr>
<tr>
<td>
Version Number
</td>
<td>
V1.0
</td> </tr>
<tr>
<td>
Link to metadata file
</td>
<td>
https://itiacnr.sharepoint.com/:f:/r/sites/CarE_Service/Documenti%20condivisi/CarE-Service%20SharePoint/WPs,Tasks%20and%20Deliverables/WP9_Project_Management/T9.4_C-ECO_Policy_of_Data_and_Information_Management/D9.4_M6_V1?csf=1&e=QStgHW
</td> </tr> </table>
## MAKING DATA OPENLY ACCESSIBLE
Each partner must ensure open access (free of charge online access for any
user) to all peer-reviewed scientific publications relating to its results.
Results are owned by the partner that generates them. ‘Results’ means any
(tangible or intangible) output of the action such as data, knowledge or
information — whatever its form or nature, whether it can be protected or not
— that is generated in the action, as well as any rights attached to it,
including intellectual property rights. Apart from the data sets specified
that will be made open (public), other data generated in CarE-Service project
should be kept confidential to avoid jeopardising future exploitation.
All partners must disseminate their results by disclosing them to the public
by appropriate means (other than those resulting from protecting or exploiting
the results) as soon as possible, including in scientific publications (in any
medium). A partner that intends to disseminate its results must give the other
partners at least 45 days' advance notice, together with sufficient
information on the results it will disseminate (Article 29.1, Grant
Agreement).
The data will be made available to the public in order to improve and maximize
access to and re-use of research data generated by the CarE-Service project.
Therefore, all the generated data should be deposited in the Zenodo repository
platform (a free repository hosted by CERN and available to all), which allows
researchers to deposit both publications and data, in line with Article 29.3
of the Grant Agreement.
On Zenodo, all research outputs from all fields of science are welcome. In the
upload form, the uploader chooses between types of files: publications (book,
book section, conference paper, journal article, patent, preprint, report,
thesis, technical note, working paper, etc.), posters, presentations,
datasets, images (figures, plots, drawings, diagrams, photos), software,
videos/audio and interactive materials such as lessons.
All metadata is stored internally in JSON-format according to a defined JSON
schema. Metadata is exported in several standard formats such as MARCXML,
Dublin Core, and DataCite Metadata Schema (according to the _OpenAIRE
Guidelines_).
Files may be deposited under closed, open, or embargoed access. Files
deposited under closed access are protected against unauthorized access at all
levels. Access to metadata and data files is provided over standard protocols
such as HTTP and OAI-PMH.
Metadata is licensed under CC0, except for email addresses. All metadata is
exported via OAI-PMH and can be harvested.
For the CarE-Service project, a Community project page on Zenodo has been
created for the partners where they can link their uploads:
https://zenodo.org/communities/careserviceproject/?page=1&size=20
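For example, the community's records could be harvested over OAI-PMH roughly as sketched below; the endpoint, set name and metadata prefix are assumptions based on Zenodo's public documentation and should be verified before use.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Zenodo's OAI-PMH endpoint; "oai_dc" requests Dublin Core metadata and a
# "user-<community>" set restricts the harvest to one community (assumed).
URL = ("https://zenodo.org/oai2d?verb=ListRecords"
       "&metadataPrefix=oai_dc&set=user-careserviceproject")

with urllib.request.urlopen(URL) as response:
    tree = ET.parse(response)

# Print the Dublin Core title of every harvested record.
DC = "{http://purl.org/dc/elements/1.1/}"
for title in tree.iter(DC + "title"):
    print(title.text)
```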
## MAKING DATA INTEROPERABLE
The CarE-Service project aims to collect and document the data in a
standardized way to ensure that the datasets can be understood, interpreted
and shared on their own, alongside the accompanying metadata and
documentation.
Widespread file formats will be generated to ensure the easy access, exchange
and reuse of the generated data from other researchers, institutions,
organizations, countries, etc.
Metadata are data that describe other data. Metadata files contain
information about the documents to be uploaded. A metadata file will be
created manually in an Excel spreadsheet and linked within each dataset. It
will include the following information:
* Title: free text
* Creator: Last name, first name
* Organization: Acronym of partner’s organization
* Date: DD/MM/YYYY
* Contributor: It can provide information referring to the EU funding and to the CarEService project itself; mainly, the terms "European Union (EU)" and "Horizon 2020", as well as the name of the action, acronym and the grant number
* Subject: Choice of keywords and classifications
* Description: Text explaining the content of the data set and other contextual information needed for the correct interpretation of the data.
* Format: Details of the file format
* Resource Type: data set, image, audio, etc.
* Identifier: DOI
* Link to data repository: Zenodo link
* Links to other publicly accessible locations of the data: e.g. Zenodo Community link, other publication platforms, etc.
* Access rights: closed access, embargoed access, restricted access, open access.
_Figure 4: Metadata excel file_
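To illustrate how such a metadata spreadsheet could be produced programmatically, the sketch below writes the fields listed above to an Excel file; the values and the use of the openpyxl library are assumptions, not project requirements.

```python
from openpyxl import Workbook

# Field names mirror the metadata list above; values are illustrative placeholders.
metadata = {
    "Title": "Data Policy and Management Plan",
    "Creator": "Doe, Jane",
    "Organization": "CECO",
    "Date": "28/02/2019",
    "Contributor": "European Union (EU), Horizon 2020, CarE-Service",
    "Subject": "Data Management Plan; GDPR; Data Policy",
    "Description": "Policy framework for information management.",
    "Format": "PDF",
    "Resource Type": "data set",
    "Identifier": "(DOI, to be added once deposited)",
    "Link to data repository": "(Zenodo link, to be added once deposited)",
    "Access rights": "open access",
}

wb = Workbook()
ws = wb.active
for row, (field_name, value) in enumerate(metadata.items(), start=1):
    ws.cell(row=row, column=1, value=field_name)
    ws.cell(row=row, column=2, value=value)
wb.save("metadata.xlsx")
```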
## INCREASE DATA RE-USE (THROUGH CLARIFYING LICENSES)
The datasets will be made available for reuse through uploads to the Zenodo
community page for the project.
In principle, the data will be stored in Zenodo after the conclusion of the
project without additional cost. All the research data will be of the highest
quality, have long-term validity and be well documented, so that other
researchers are able to access and understand them after 5 years.
If datasets are updated, the partner that possesses the data has the
responsibility to manage the different versions and to make sure that the
latest version is available in the case of publicly available data. Quality
control of the data is the responsibility of the relevant responsible partner
generating the data.
# ALLOCATION OF RESOURCES
There are no costs for making the data from CarE-Service project findable,
accessible, interoperable and reusable (FAIR). The repository platform that
will be used is Zenodo, an interdisciplinary open data repository service
maintained by CERN, which allows researchers to deposit both publications and
data while providing tools to link them without any cost. Any other costs
related to open access to research data in the CarE-Service project are
eligible for reimbursement during the duration of the project under the
conditions defined in the Grant Agreement, in particular Article 6.2 D.3, but
also other articles relevant for the cost category chosen.
Circular Economy Solutions GmbH (CECO) is responsible for delivering the Data
Management Plan of the CarE-Service project in accordance with Task 9.4
(deliverable D9.4), as well as the updated deliverables D9.7 and D9.8, which
will be submitted at predetermined points (project months 18 and 36).
Consiglio Nazionale delle Ricerche (CNR), the project coordinator, is
appointed as data controller to demonstrate that the data subject has
consented to processing of their personal data in all cases. Regarding the
personal data that may be collected from the platforms of PROD and EVAI, the
data processor/data protection officer in these companies will demonstrate to
the project data controller the consent/assent of the data subjects as part of
their online subscription process.
All the project partners should respect the policies set out in the Data
Management Plan. Datasets have to be created, managed and stored appropriately
and in line with European Commission and local legislation. Dataset
validation, registration of metadata and backing up of data for sharing
through repositories are the responsibility of the partner that generates the
data in the work package.
The datasets in Zenodo will be preserved in line with the European Commission
Data Deposit Policy and comply with the European Commission Open Access Policy
and Research Data Pilot. The data will be preserved indefinitely in the Zenodo
repository; this corresponds to the lifetime of the host laboratory CERN,
which currently has an experimental programme defined for at least the next 20
years.
# DATA SECURITY
During the project, all data will be stored on the partners' storage devices
(internal servers, cloud, etc.). The following table details the storage
arrangements of the project partners.
<table>
<tr>
<th>
**Short name**
</th>
<th>
**Data storage methods**
</th> </tr>
<tr>
<td>
CNR
</td>
<td>
At CNR, all data related to the CarE-Service project are stored in well-known
repositories that can be accessed only with specific credentials and deny any
access from unauthorized sources. In particular, all data are stored and
managed through personal devices (administrated by institutional credentials),
Dropbox and Microsoft SharePoint.
</td> </tr>
<tr>
<td>
LIU
</td>
<td>
At LIU, all data related to the CarE-Service project are carefully and
securely managed and stored on local and cloud repositories that cannot be
accessed by unauthorized internal or external entities.
</td> </tr>
<tr>
<td>
ENV
</td>
<td>
ENV possesses an internal server with a folder named “CarEServiceProject” with
restricted access for the staff assigned to the project. The internal server
cannot be accessed from outside the facilities.
</td> </tr>
<tr>
<td>
PROD
</td>
<td>
Documentation related to the project is stored in Microsoft SharePoint Online,
and access to the data is controlled and granted to Prodigentia employees on a
need-to-know basis. Microsoft provides among the highest standards of data
security in the industry, including ISO 27001 certification.
Data in the ICT platform databases are stored i) at the PROD office (test and
development systems) on local servers, with restricted access, accessible only
to the team allocated to the CarE-Service project, and ii) at qualified
datacenters in Europe (test and production systems).
The technology used by PROD to implement the ICT platform follows best
practices in information security, implementing data encryption for passwords,
single sign-on and account lockout, among others.
Following industry best practices, PROD regularly contracts qualified third
parties to audit the solutions developed by PROD, running black-box and
white-box tests. Potential security flaws identified are mitigated as soon as
possible according to their criticality.
</td> </tr>
<tr>
<td>
CSIC
</td>
<td>
All data related to the CarE-Service project are stored in an internal server
named “CarEService” with access only for authorized persons. Peer-reviewed
publications will be also stored in an Open Access Repository “Digital CSIC”,
as well as in Zenodo.
</td> </tr>
<tr>
<td>
CECO
</td>
<td>
At CECO, all data related to the CarE-Service project are stored on data
repositories managed by the IT service of CECO (with no access from outside)
and on a Microsoft Office 365 cloud hosted in Germany for easily sharing the
data internally.
</td> </tr>
<tr>
<td>
COBAT
</td>
<td>
At COBAT, all data related to the CarE-Service project are carefully and
securely managed and stored on local repositories that cannot be accessed by
unauthorized internal or external entities.
</td> </tr>
<tr>
<td>
FCA
</td>
<td>
In FCA, all the data related to the CarE-Service project are managed and
stored on local repositories; from 2019 there will also be the possibility to
store them in a Google Suite Platform cloud repository that cannot be accessed
by unauthorized internal or external entities.
</td> </tr>
<tr>
<td>
RAD
</td>
<td>
At RAD, all data related to the CarE-Service project are managed and stored
locally and on the GeoSharing Platform residing on the internal servers. Both
the local repositories and the GeoSharing Platform are not accessible from
external networks, and only the people actively involved in the CarE-Service
work group can manage, store and upload/download the data files generated.
</td> </tr>
<tr>
<td>
IMA
</td>
<td>
At IMA, all documents and data related to the CarE-Service project are stored
on specially secured data repositories managed by the IT service of IMA. In
addition, project-specific disseminated information will be stored in the
specified Zenodo data repository.
</td> </tr>
<tr>
<td>
Fraunhofer
</td>
<td>
At Fraunhofer, the data are stored on the institute's internal server. The
server is protected by a modern firewall and antivirus system managed by
dedicated specialists. The data are located in an area reserved for the
project, where access is allowed only to those who work on this project (6
persons).
</td> </tr>
<tr>
<td>
AVIC
</td>
<td>
AVIC possesses an internal server with a folder named “CarE-Service” with
restricted access for the staff assigned to the project. The internal server
cannot be accessed by unauthorized internal or external entities.
</td> </tr>
<tr>
<td>
CIA
</td>
<td>
CIA possesses an internal server with a folder for the CarE-Service project
with restricted access for the staff assigned to the project. The internal
server cannot be accessed from outside the facilities.
</td> </tr>
<tr>
<td>
EVAI
</td>
<td>
</td> </tr>
<tr>
<td>
JRC
</td>
<td>
Data, documents and files are stored on the local server that can be only
accessed internally. Possibility for sharing folders is available.
</td> </tr> </table>
In accordance with Article 29, “Dissemination of results – Open access –
Visibility of EU funding”, and with the aim to improve and maximize access to
and re-use of research data generated by the CarE-Service project, data will
be archived and preserved in the Zenodo data sharing repository. The platform
provides the option of open or restricted access to data, depending on the
dissemination level.
# ETHICAL ASPECTS
All the CarE-Service project partners must carry out the action in compliance
with ethical principles (including the highest standards of research
integrity) and applicable international, EU and national law. The partners
must respect the highest standards of research integrity - as set out, for
instance, in the European Code of Conduct for Research Integrity (Article 34,
Grant Agreement).
CarE-Service does not intend to collect, store and/or process sensitive data
(health, genetic or biometric data). CarE-Service may collect, store and
monitor both “anonymous” and “non-anonymous” non-sensitive personal data.
Non-anonymous personal data will be collected only if needed to achieve the
targets of the project. In this case, the full freedom and awareness of the
data subject will be ensured. For this purpose, a dedicated space for the data
subject's consent to the non-anonymous data collection is present in the
designed consent forms (Deliverable 10.1).
CarE-Service may collect personal data from members of the consortium and
other participants through several means, such as questionnaires or surveys,
meetings, conferences, etc. In all cases the partners must follow the standard
procedure of obtaining authorization from the members to show and distribute
the results, videos, images, etc. to the public.
In CarE-Service, the data collection as well as processing will be fully based
on consent/assent of the data subject. Regarding the general personal data
collection within CarE-Service project, the data subjects’ consent/assent will
be given in the context of a written declaration (by signing the forms). The
templates of the informed consent/assent forms are devised to be fully
compliant with EU General Data Protection Regulation (GDPR).
The informed consent form is written in English and also translated and
available in four more languages (German, Spanish, Italian and French)
(Deliverable 10.1, §3.1 and Appendix A1, B1, C1, D1).
CarE-Service partners must keep confidential any data, documents or other
material (in any form) that is identified as confidential at the time it is
disclosed (“confidential information”).
# Introduction
RESOLUTE is a public-private partnership with 13 partners from academia and
industry with an overarching goal: To trigger an escalation in the
appreciation and intensity of research on solute carriers (SLCs) worldwide and
to establish SLCs as a tractable target class for medical research and
development. Through the coupling of an inclusive, “open-access ethos” to the
data, results, techniques and reagents with the highest-possible quality of
research output, RESOLUTE expects to accelerate the pace and increase the
impact of research in the field of SLCs to the global benefit of basic
academic research through to applied research in SMEs and pharmaceutical
companies.
Data management plays a major role in the project by providing a central data
repository for collecting, curating, sharing, integrating, analysing and
publishing data generated across the individual work packages and consortium
members. This data management plan (DMP) describes the infrastructure,
policies and life cycle for research data generated and/or processed
throughout the RESOLUTE project. It is intended to be a living document and
will be updated throughout the project whenever significant changes occur. The
official and public data management plan is updated at mid-term (Deliverable
8.2) and a final data management plan is presented at the end of the project
(Deliverable 8.3).
# Research Data
Virtually every individual task of the RESOLUTE project depends on or produces
data in order to develop reagents, deorphanize SLCs or set up assays. On the
one hand, existing data in the public domain or with permissive licenses is
collected and compiled into the RESOLUTE knowledgebase (Deliverable 8.4). A
majority of data, on the other hand, is newly generated, processed and
analysed throughout the project. This process is supported by the RESOLUTE
database (Deliverable 8.5). Both resources are accessible from the RESOLUTE
web portal at https://re-solute.eu (Deliverable 8.6; see Figure 1). For more
details, please refer to _Data Curation: the RESOLUTE Data- and Knowledgebase_
on page 12.
**Figure 1.** RESOLUTE web portal.
Ultimately, data generated in the course of RESOLUTE will be disseminated
through the RESOLUTE knowledgebase, among other means (see also _Data
Lifecycle_ on page 13).
## Data Types
A multitude of types of data is expected to be produced in the course of the
RESOLUTE project, ranging from primary data over scientific publications to
reagents, analysis tools and workflows. The following tables describe the
different data types and levels, and their corresponding formats, standards
and relations in detail. Please note that this list will be significantly
extended during the course of the project.
## ▪ RNA-Seq measurements of cell lines
<table>
<tr>
<th>
_Data Description_
</th>
<th>
RNA-Seq measurements of various cell lines using NGS
</th> </tr>
<tr>
<td>
_Data Type_
</td>
<td>
Transcription profiling
</td> </tr>
<tr>
<td>
_Data Level_
</td>
<td>
primary (raw)
</td> </tr>
<tr>
<td>
_Data Format_
</td>
<td>
FASTQ (community accepted, open, human readable format)
</td> </tr>
<tr>
<td>
_Data Source_
</td>
<td>
Illumina high-throughput sequencing of isolated RNA
</td> </tr>
<tr>
<td>
_Meta Data_
</td>
<td>
* RNA isolation protocol
* Library preparation protocol
* Instrument type
* Instrument method
* Sample size
* Sample description (source, treatment, timepoint, etc.)
</td> </tr>
<tr>
<td>
_Quality Control_
</td>
<td>
Standardized quality control using FastQC software to assess:
* Per base sequence quality (Phred scores)
* Per base sequence content
* Per sequence GC content
* Per base N content
* Sequence duplication level
* Adapter content
* K-mer content
</td> </tr>
<tr>
<td>
_Data Origin_
</td>
<td>
Task 1.5 (parental cell lines, knockout and overexpression cell lines,
required controls)
Task 7.1 (knockout cell lines of priority SLCs under perturbation conditions)
</td> </tr>
<tr>
<td>
_Data Author_
</td>
<td>
Boehringer (T1.5), Sanofi (T7.1)
</td> </tr>
<tr>
<td>
_Data Relation & Utility _
</td>
<td>
Serves as input to mapping to a reference genome (see _Alignments of RNA-Seq
to reference genome_ below).
</td> </tr>
<tr>
<td>
_Data Volume_
</td>
<td>
10 TB (expected)
</td> </tr>
<tr>
<td>
_Data Sustainability_
</td>
<td>
Data can be duplicated to following public repositories:
* EBI’s sequence read archive (SRA)
* Gene Expression Omnibus (GEO)
</td> </tr> </table>
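The standardized FastQC check referenced in the table could, for instance, be scripted as in the following sketch; the directory layout is hypothetical and FastQC must be installed separately.

```python
import subprocess
from pathlib import Path

raw_dir = Path("runs/fastq")   # hypothetical location of the raw FASTQ files
out_dir = Path("qc/fastqc")
out_dir.mkdir(parents=True, exist_ok=True)

# FastQC writes one report per input file, covering the metrics listed above
# (per-base quality, GC content, duplication levels, adapter content, ...).
fastq_files = sorted(str(p) for p in raw_dir.glob("*.fastq.gz"))
subprocess.run(["fastqc", "--outdir", str(out_dir), *fastq_files], check=True)
```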
## ▪ Alignments of RNA-Seq to reference genome
<table>
<tr>
<th>
_Data Description_
</th>
<th>
Mapping of RNA-Seq to a reference genome
</th> </tr>
<tr>
<td>
_Data Type_
</td>
<td>
Transcription profiling
</td> </tr>
<tr>
<td>
_Data Level_
</td>
<td>
derived (processed)
</td> </tr>
<tr>
<td>
_Data Format_
</td>
<td>
BAM: Binary Alignment Map (community accepted, open, binary format)
</td> </tr>
<tr>
<td>
_Data Source_
</td>
<td>
Processing of raw reads from _RNA-Seq measurements of cell lines_ on page 5
</td> </tr>
<tr>
<td>
_Meta Data_
</td>
<td>
* Read trimming and clipping method
* Used reference gene and transcript model (e.g. ENSEMBL GRCh38, release 94)
* Software version and parameters used for read mapping
</td> </tr>
<tr>
<td>
_Quality Control_
</td>
<td>
Custom analysis scripts to assess:
* Gene body coverage (no 3’ bias)
* Mapping rate
* Unique mapping rate
* Gene model mapping rate
* Replicate correlation
</td> </tr>
<tr>
<td>
_Data Origin_
</td>
<td>
Task 1.5 (parental cell lines, knockout and overexpression cell lines,
required controls)
Task 7.1 (knockout cell lines of priority SLCs under perturbation conditions)
</td> </tr>
<tr>
<td>
_Data Author_
</td>
<td>
CeMM
</td> </tr>
<tr>
<td>
_Data Relation & Utility _
</td>
<td>
Serves as input to quantitative gene expression analysis (see _Quantitative
gene expression of cell lines_ below).
</td> </tr>
<tr>
<td>
_Data Volume_
</td>
<td>
10 TB (expected)
</td> </tr>
<tr>
<td>
_Data Sustainability_
</td>
<td>
Because of straightforward reproducibility, data from this intermediate step
in transcription profiling will not be submitted to external repositories.
</td> </tr> </table>
## ▪ Quantitative gene expression of cell lines
<table>
<tr>
<th>
_Data Description_
</th>
<th>
Quantitative gene expression analysis of various cell lines
</th> </tr>
<tr>
<td>
_Data Type_
</td>
<td>
Transcription profiling
</td> </tr>
<tr>
<td>
_Data Level_
</td>
<td>
interpreted (analysed)
</td> </tr>
<tr>
<td>
_Data Format_
</td>
<td>
CSV comma-separated values (community accepted, open, human readable format)
</td> </tr>
<tr>
<td>
_Data Source_
</td>
<td>
Analysis of gene-model mapped reads from _Alignments of RNA-Seq to reference
genome_ on page 5.
</td> </tr>
<tr>
<td>
_Meta Data_
</td>
<td>
* Data normalization procedure
* Software version and parameters used for quantification
* Statistical model used
</td> </tr>
<tr>
<td>
_Quality Control_
</td>
<td>
Custom analysis scripts to assess:
* Clustering on PCA analysis of replicates
* Reproducibility of previously published transcription profiles (if available)
</td> </tr>
<tr>
<td>
_Data Origin_
</td>
<td>
Task 1.5 (parental cell lines, knockout and overexpression cell lines,
required controls)
Task 7.1 (knockout cell lines of priority SLCs under perturbation conditions)
</td> </tr>
<tr>
<td>
_Data Author_
</td>
<td>
CeMM
</td> </tr>
<tr>
<td>
_Data Relation & Utility _
</td>
<td>
* Supports decision process on cell line selection for assay development in Work Package 3
* Input to priority list generation in Task 4.2
* Input to integrative analysis in Task 7.3
</td> </tr>
<tr>
<td>
</td>
<td>
\- Data from Task 1.5 will be published in the RESOLUTE knowledgebase
</td> </tr>
<tr>
<td>
_Data Volume_
</td>
<td>
0.5 TB (expected)
</td> </tr>
<tr>
<td>
_Data Sustainability_
</td>
<td>
Data can be duplicated to following public repositories:
\- EBI’s sequence read archive (SRA)
</td> </tr> </table>
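A replicate-clustering check of the kind listed under quality control might look like the sketch below; the file name, matrix layout and the scikit-learn dependency are assumptions, not the project's actual analysis scripts.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

# counts.csv: normalized gene expression with genes as rows and samples
# (including replicates) as columns -- a hypothetical layout.
expr = pd.read_csv("counts.csv", index_col=0)

# Log-transform and project the samples onto the first two principal
# components; replicates of the same condition should cluster together.
X = np.log2(expr.T + 1)
pcs = PCA(n_components=2).fit_transform(X)
for sample, (pc1, pc2) in zip(expr.columns, pcs):
    print(f"{sample}: PC1={pc1:.2f}, PC2={pc2:.2f}")
```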
## ▪ SLC-wide viability screen
<table>
<tr>
<th>
_Data Description_
</th>
<th>
Pooled viability screen using a SLC-wide sgRNA library
</th> </tr>
<tr>
<td>
_Data Type_
</td>
<td>
pooled sgRNA profiling
</td> </tr>
<tr>
<td>
_Data Level_
</td>
<td>
primary (raw)
</td> </tr>
<tr>
<td>
_Data Format_
</td>
<td>
BAM (community accepted, open, binary format)
</td> </tr>
<tr>
<td>
_Data Source_
</td>
<td>
Illumina high-throughput sequencing
</td> </tr>
<tr>
<td>
_Meta Data_
</td>
<td>
* Library preparation protocol
* Instrument type
* Instrument method
* Sample size
</td> </tr>
<tr>
<td>
_Quality Control_
</td>
<td>
Standardized quality control using in-house scripts to assess:
\- Per base sequence quality (Phred scores)
</td> </tr>
<tr>
<td>
_Data Origin_
</td>
<td>
Task 2.5 (Genetic interactions)
</td> </tr>
<tr>
<td>
_Data Author_
</td>
<td>
CeMM
</td> </tr>
<tr>
<td>
_Data Relation & Utility _
</td>
<td>
Serves as input to mapping reads to the sgRNA library (see _Quantitative
viability in SLC-wide screen_ below).
</td> </tr>
<tr>
<td>
_Data Volume_
</td>
<td>
2 TB (expected)
</td> </tr>
<tr>
<td>
_Data Sustainability_
</td>
<td>
Data can be duplicated to following public repositories:
* EBI’s sequence read archive (SRA)
* Gene Expression Omnibus (GEO)
</td> </tr> </table>
## ▪ Quantitative viability in SLC-wide screen
<table>
<tr>
<th>
_Data Description_
</th>
<th>
Viability of SLC genes as count tables of mapped sgRNAs
</th> </tr>
<tr>
<td>
_Data Type_
</td>
<td>
pooled sgRNA profiling
</td> </tr>
<tr>
<td>
_Data Level_
</td>
<td>
derived (processed)
</td> </tr>
<tr>
<td>
_Data Format_
</td>
<td>
CSV comma-separated values (community accepted, open, human readable format)
</td> </tr>
<tr>
<td>
_Data Source_
</td>
<td>
Demultiplexing and mapping of reads from _SLC-wide viability screen_ above to
a reference library.
</td> </tr>
<tr>
<td>
_Meta Data_
</td>
<td>
* mapping parameters
* sgRNA library used
* barcodes used
</td> </tr>
<tr>
<td>
_Quality Control_
</td>
<td>
Standardized quality control using in-house scripts to assess:
* Mapping rate of barcodes
* Mapping rate of guides
</td> </tr>
<tr>
<td>
_Data Origin_
</td>
<td>
Task 2.5 (Genetic interactions)
</td> </tr>
<tr>
<td>
_Data Author_
</td>
<td>
CeMM
</td> </tr>
<tr>
<td>
_Data Relation & Utility _
</td>
<td>
Serves as input to genetic interaction analysis (see _Genetic interaction
analysis_ below).
</td> </tr>
<tr>
<td>
_Data Volume_
</td>
<td>
0.1 TB (expected)
</td> </tr>
<tr>
<td>
_Data Sustainability_
</td>
<td>
Because of straightforward reproducibility, data from this intermediate step
in sgRNA profiling will not be submitted to external repositories.
</td> </tr> </table>
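Demultiplexed reads could be turned into such a count table along the following lines; the BAM file name is hypothetical, and pysam is one common choice for reading alignments rather than the consortium's mandated tool.

```python
from collections import Counter

import pysam  # widely used library for reading BAM alignment files

# Count reads mapped to each sgRNA in the reference library; the resulting
# count table feeds the genetic interaction analysis described next.
counts = Counter()
with pysam.AlignmentFile("screen_sample1.bam", "rb") as bam:
    for read in bam:
        if not read.is_unmapped:
            counts[read.reference_name] += 1  # reference name = sgRNA identifier

with open("sgrna_counts.csv", "w") as out:
    out.write("sgRNA,count\n")
    for guide, n in sorted(counts.items()):
        out.write(f"{guide},{n}\n")
```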
## ▪ Genetic interaction analysis
<table>
<tr>
<th>
_Data Description_
</th>
<th>
Statistical model and network for genetic SLC-SLC interactions based on pooled
viability screens in knock out cell lines.
</th> </tr>
<tr>
<td>
_Data Type_
</td>
<td>
pooled sgRNA profiling
</td> </tr>
<tr>
<td>
_Data Level_
</td>
<td>
interpreted (analysed)
</td> </tr>
<tr>
<td>
_Data Format_
</td>
<td>
CSV comma-separated values (community accepted, open, human readable format)
</td> </tr>
<tr>
<td>
_Data Source_
</td>
<td>
Analysis of viability tables from _Quantitative viability in SLC-wide screen_
on page 7.
</td> </tr>
<tr>
<td>
_Meta Data_
</td>
<td>
* Data normalization procedure
* Statistical model used
* Network representation thresholds
</td> </tr>
<tr>
<td>
_Quality Control_
</td>
<td>
Custom analysis scripts to assess:
* Count distributions and outlier detection
* Clustering on PCA analysis of replicates and time points
</td> </tr>
<tr>
<td>
_Data Origin_
</td>
<td>
Task 2.5 (Genetic interactions)
</td> </tr>
<tr>
<td>
_Data Author_
</td>
<td>
CeMM
</td> </tr>
<tr>
<td>
_Data Relation & Utility _
</td>
<td>
* Supports decision process on cell line selection for assay development in Work Package 3
* Input to priority list generation in Task 4.2
* Input to integrative analysis in Task 7.3
* Data from Task 2.5 will be published in the RESOLUTE knowledgebase
</td> </tr>
<tr>
<td>
_Data Volume_
</td>
<td>
0.1 TB (expected)
</td> </tr>
<tr>
<td>
_Data Sustainability_
</td>
<td>
</td> </tr> </table>
## ▪ Interaction proteomics based on mass spectrometry
<table>
<tr>
<th>
_Data Description_
</th>
<th>
Mass spectra of interacting proteins
</th> </tr>
<tr>
<td>
_Data Type_
</td>
<td>
SLC-focused protein-protein interactions
</td> </tr>
<tr>
<td>
_Data Level_
</td>
<td>
primary (raw)
</td> </tr>
<tr>
<td>
_Data Format_
</td>
<td>
mzML (community accepted, open, XML-based format)
</td> </tr>
<tr>
<td>
_Data Source_
</td>
<td>
LC-MS/MS (Thermo Orbitrap Mass Spectrometer)
</td> </tr>
<tr>
<td>
_Meta Data_
</td>
<td>
* Sample preparation protocol
* Instrument type
* Instrument method
* Sample size
</td> </tr>
<tr>
<td>
_Quality Control_
</td>
<td>
Standardized quality control using ProteomeDiscoverer / Skyline to assess:
* Total ion chromatogram
* Visual inspection of the “ion map”
</td> </tr>
<tr>
<td>
_Data Origin_
</td>
<td>
Task 2.6 (AP-MS)
Task 7.2 (BioID, AP-MS)
</td> </tr>
<tr>
<td>
_Data Author_
</td>
<td>
CeMM
</td> </tr>
<tr>
<td>
_Data Relation & Utility _
</td>
<td>
Serves as input to peptide identification and quantification (see _Protein-
protein interaction data_ below).
</td> </tr>
<tr>
<td>
_Data Volume_
</td>
<td>
5 TB (expected)
</td> </tr>
<tr>
<td>
_Data Sustainability_
</td>
<td>
Data can be duplicated to following public repositories:
\- ProteomeXchange
</td> </tr> </table>
## ▪ Protein-protein interaction data
<table>
<tr>
<th>
_Data Description_
</th>
<th>
Peptides and proteins identified and quantified
</th> </tr>
<tr>
<td>
_Data Type_
</td>
<td>
SLC-focused protein-protein interactions
</td> </tr>
<tr>
<td>
_Data Level_
</td>
<td>
derived (processed)
</td> </tr>
<tr>
<td>
_Data Format_
</td>
<td>
mzID (community accepted, open, XML-based format)
</td> </tr>
<tr>
<td>
_Data Source_
</td>
<td>
Searching MS/MS spectra (from _Interaction proteomics based on mass
spectrometry_ on page 8) in a reference database and quantifying signal
intensity.
</td> </tr>
<tr>
<td>
_Meta Data_
</td>
<td>
* reference protein database
* search parameters
</td> </tr>
<tr>
<td>
_Quality Control_
</td>
<td>
Standardized quality control using in-house scripts to assess:
* amount of contamination
* success rate of peptide-spectrum-matching
* score distributions in a target-decoy approach
</td> </tr>
<tr>
<td>
_Data Origin_
</td>
<td>
Task 2.6 (AP-MS)
Task 7.2 (BioID, AP-MS)
</td> </tr>
<tr>
<td>
_Data Author_
</td>
<td>
CeMM
</td> </tr>
<tr>
<td>
_Data Relation & Utility _
</td>
<td>
Serves as input to interaction network analysis (see _Protein interaction
network_ below).
</td> </tr>
<tr>
<td>
_Data Volume_
</td>
<td>
1 TB (expected)
</td> </tr>
<tr>
<td>
_Data Sustainability_
</td>
<td>
Data can be duplicated to following public repositories:
* ProteomeXchange
* PRIDE
</td> </tr> </table>
## ▪ Protein interaction network
<table>
<tr>
<th>
_Data Description_
</th>
<th>
Statistical model and network for SLC focused protein-protein interactions.
</th> </tr>
<tr>
<td>
_Data Type_
</td>
<td>
SLC-focused protein-protein interactions
</td> </tr>
<tr>
<td>
_Data Level_
</td>
<td>
interpreted (analysed)
</td> </tr>
<tr>
<td>
_Data Format_
</td>
<td>
CSV comma-separated values (community accepted, open, human readable format)
</td> </tr>
<tr>
<td>
_Data Source_
</td>
<td>
Integrative analysis of identification and quantification data from
_Protein-protein interaction data_ on page 9.
</td> </tr>
<tr>
<td>
_Meta Data_
</td>
<td>
* Data normalization procedure
* Negative controls used
* Statistical model used
* Network representation thresholds
</td> </tr>
<tr>
<td>
_Quality Control_
</td>
<td>
Custom analysis scripts to assess:
* Intensity distributions and sample outlier detection
* Clustering on PCA analysis of replicates
* Integration of subcellular localization data
* Integration of CRAPome reference data sets
</td> </tr>
<tr>
<td>
_Data Origin_
</td>
<td>
Task 2.6 (AP-MS)
Task 7.2 (BioID, AP-MS)
</td> </tr>
<tr>
<td>
_Data Author_
</td>
<td>
CeMM
</td> </tr>
<tr>
<td>
_Data Relation & Utility _
</td>
<td>
* Input to priority list generation in Task 4.2
* Input to integrative analysis in Task 7.3
* Data will be published in the RESOLUTE knowledgebase
</td> </tr>
<tr>
<td>
_Data Volume_
</td>
<td>
0.1 TB (expected)
</td> </tr>
<tr>
<td>
_Data Sustainability_
</td>
<td>
Deposition of high-confidence interaction data at:
* BioGRID
* STRING
* IntAct
</td> </tr> </table>
## ▪ Quantitative ionomics
<table>
<tr>
<th>
_Data Description_
</th>
<th>
Quantitative determination of different isotopes
</th> </tr>
<tr>
<td>
_Data Type_
</td>
<td>
Ionomic profiling
</td> </tr>
<tr>
<td>
_Data Level_
</td>
<td>
primary (raw)
</td> </tr>
<tr>
<td>
_Data Format_
</td>
<td>
CSV comma-separated values (community accepted, open, human readable format)
</td> </tr>
<tr>
<td>
_Data Source_
</td>
<td>
ICP-MS analysis of cell lysates
</td> </tr>
<tr>
<td>
_Meta Data_
</td>
<td>
sample name, date of analysis, processing method, tune parameters
</td> </tr>
<tr>
<td>
_Quality Control_
</td>
<td>
calibration curves, CV of replicates
</td> </tr>
<tr>
<td>
_Data Origin_
</td>
<td>
Task 2.8 (Determination of SLCs using ions as (co-)substrates)
</td> </tr>
<tr>
<td>
_Data Author_
</td>
<td>
Vifor (International) Ltd.
</td> </tr>
<tr>
<td>
_Data Relation & Utility _
</td>
<td>
Characterization of SLCs based on ion activity; input to priority list
generation in Task 4.3
</td> </tr>
<tr>
<td>
_Data Volume_
</td>
<td>
5 GB (expected)
</td> </tr>
<tr>
<td>
_Data Sustainability_
</td>
<td>
potential data duplication to public repositories _(to be discussed)_
</td> </tr> </table>
## ▪ Cell-based or cell-free assays using fluorescence readout
<table>
<tr>
<th>
_Data Description_
</th>
<th>
Quantitative SLC transporter assays, such as calcein quenching assay,
fluorescent SLC or ligand binding or internalization assays, fluorescence
polarization assay
</th> </tr>
<tr>
<td>
_Data Type_
</td>
<td>
Spectrophotometric data, high content imaging (HCI)
</td> </tr>
<tr>
<td>
_Data Level_
</td>
<td>
primary (raw) / derived (processed) / interpreted (analysed)
</td> </tr>
<tr>
<td>
_Data Format_
</td>
<td>
Excel, Columbus, TIFF, GraphPad Prism, Microsoft PowerPoint and Word
</td> </tr>
<tr>
<td>
_Data Source_
</td>
<td>
</td> </tr>
<tr>
<td>
_Meta Data_
</td>
<td>
sample name, date of acquisition, filter set, settings, pixel data
</td> </tr>
<tr>
<td>
_Quality Control_
</td>
<td>
Performance of assay confirmed by appropriate controls and reference substrate
/ inhibitors; checked by a second person before release
</td> </tr>
<tr>
<td>
_Data Origin_
</td>
<td>
Task 3.5 (Assays involving fluorescence); Task 6.1 (Assay development)
</td> </tr>
<tr>
<td>
_Data Author_
</td>
<td>
Vifor (International) Ltd.
</td> </tr>
<tr>
<td>
_Data Relation & Utility _
</td>
<td>
Establishing assays to support WP3 and WP6
</td> </tr>
<tr>
<td>
_Data Volume_
</td>
<td>
0.5 TB (if HCI assays performed)
</td> </tr>
<tr>
<td>
_Data Sustainability_
</td>
<td>
These data are not compatible with public repositories. They could be part of
scientific manuscripts or reports.
</td> </tr> </table>
## ▪ Functional assays
<table>
<tr>
<th>
_Data Description_
</th>
<th>
Generation of robust cell-based and cell-free assays for SLCs
</th> </tr>
<tr>
<td>
_Data Type_
</td>
<td>
Functional readout systems
</td> </tr>
<tr>
<td>
_Data Level_
</td>
<td>
interpreted (analysed)
</td> </tr>
<tr>
<td>
_Data Format_
</td>
<td>
Depending on the technology used: kinetics or end-point values;
substrate/activator dose-response curves
</td> </tr>
<tr>
<td>
_Data Source_
</td>
<td>
Primary data generated by the newly developed assays on parental cell lines,
knockout and overexpression cell lines, and purified expressed proteins,
together with their negative controls.
Data from public sources (e.g. publications, databases, patents).
</td> </tr>
<tr>
<td>
_Meta Data_
</td>
<td>
* Data normalization procedure
* Software version and parameters used for quantification
* Statistical model used
</td> </tr>
<tr>
<td>
_Quality Control_
</td>
<td>
* Data robustness (e.g. RZ’ factor) and reproducibility across experiments and over time
* Reproducibility of previously published data, where applicable
</td> </tr>
<tr>
<td>
_Data Origin_
</td>
<td>
Work package 6 (Generation of robust cell-based (high-throughput) and/or cell-
free assay systems for all proteins on the SLC priority list)
</td> </tr>
<tr>
<td>
_Data Author_
</td>
<td>
all participants of work package 6
</td> </tr>
<tr>
<td>
_Data Relation & Utility_
</td>
<td>
* Input from WP1: stable cell lines and/or constructs’ generation
* Input from WP2: SLCs potential substrates for orphan SLCs
* Input from WP3: most suitable technology to be used to set a functional assay
* Input from WP4: priority target list to work on
* Input from WP5: available expressed SLC proteins, for cell-free assay set up
</td> </tr>
<tr>
<td>
_Data Volume_
</td>
<td>
0.5 TB (expected)
</td> </tr>
<tr>
<td>
_Data Sustainability_
</td>
<td>
Data can be duplicated to the following public repositories, upon consortium
agreement:
* IUPHAR or NIH databases
</td> </tr> </table>
## ▪ SLC homology models
<table>
<tr>
<th>
_Data Description_
</th>
<th>
Homology modeling
</th> </tr>
<tr>
<td>
_Data Type_
</td>
<td>
3D models
</td> </tr>
<tr>
<td>
_Data Level_
</td>
<td>
derived (processed)
</td> </tr>
<tr>
<td>
_Data Format_
</td>
<td>
atomic coordinates in PDB format (a community-accepted, open, human-readable
format)
</td> </tr>
<tr>
<td>
_Data Source_
</td>
<td>
public repositories (PDB, HGNC)
</td> </tr>
<tr>
<td>
_Meta Data_
</td>
<td>
* Alignment files
* Unrefined homology models
</td> </tr>
<tr>
<td>
_Quality Control_
</td>
<td>
* Accuracy of the template/target alignment
* Structure evaluation programs
* Enrichment calculations to assess the models’ utility for virtual screening
</td> </tr>
<tr>
<td>
_Data Origin_
</td>
<td>
Task 8.5 (Data mining)
</td> </tr>
<tr>
<td>
_Data Author_
</td>
<td>
UniVie
</td> </tr>
<tr>
<td>
_Data Relation & Utility_
</td>
<td>
* Helps in understanding the mechanism of transport
* Useful for conducting SAR studies
* Helps to predict new compounds
</td> </tr>
<tr>
<td>
_Data Volume_
</td>
<td>
2 TB (expected)
</td> </tr>
<tr>
<td>
_Data Sustainability_
</td>
<td>
</td> </tr> </table>
This data management plan is a living document, and this section on Data Types
will be updated and extended regularly. Further data types to be defined
include, for example, targeted and untargeted metabolomics data, small-molecule
binding and functional bioactivity data, high-content screening data, scripts,
algorithms and software tools for data analysis and visualization, QSAR
equations, computational models, and data mining workflows.
**Data Ownership**
Data is inherently owned by the institution of its producer (see also
_Consortium Agreement_ paragraph 7.1.1).
# Data Curation: the RESOLUTE Data- and Knowledgebase
While data physically resides on the CeMM storage server (see _Secure Storage_
on page 17), the RESOLUTE data- and knowledgebase represent the central points
of data and knowledge management, data curation and data access throughout the
RESOLUTE project.
The RESOLUTE database (Deliverable 8.5) provides an interface to _primary_ ,
_derived_ and _interpreted_ data as well as corresponding _metadata_ .
Moreover, it is not limited to data, but is also used to identify and manage
material (e.g. reagents, biological samples, etc.) and events (e.g.
experiments, measurements, shipping, etc.). Thus, it enables linking data to
its origin and allows efficient implementation of data lineage and data
provenance tracking. The RESOLUTE database has a restricted area, accessible
to the participants only, and a public area, which is openly accessible.
The RESOLUTE knowledgebase (Deliverable 8.4) comprises mainly _reference_
data, i.e. data compiled from external databases and resources. It is a
comprehensive and accurate resource of integrated state-of-the-art knowledge
on the various SLC families. Over the course of the project, it will be
expanded by selected _interpreted_ data sets, i.e. analysis results from data
produced in RESOLUTE. The RESOLUTE knowledgebase is largely openly accessible
and contains some extra views accessible to the participants only.
In summary, the RESOLUTE database is experiment and data set centric, while
the RESOLUTE knowledgebase focuses on biological entities. Both tools will
link to each other wherever appropriate.
## Data Lifecycle
Data generation takes place at the sites of participants, and the data is
initially stored locally. After passing a local quality control and an appraisal of
relevance, data will be uploaded to the RESOLUTE database. This submission
process requires annotation of metadata. Data in the RESOLUTE database is
automatically available to all participants (see _Data Sharing_ on page 14).
To be publicly accessible, data has to go through a data dissemination
procedure (see _Data Dissemination_ on page 14). Selected interpreted data
will be also published in the RESOLUTE knowledgebase.
**Figure 2.** Schema of data lifecycle including the data dissemination
procedure in RESOLUTE.
## Data Identifiers
We developed a RESOLUTE identifier schema following recently published
recommendations for the field of life sciences 1 . The identifiers have to
be short and robust, as they will not only be used for data but also for
reagents and therefore have to be written/read to/from test tubes regularly.
A RESOLUTE identifier starts with two letters, specifying the type of entity
(e.g. CE for cell line). The main part consists of four alphanumeric
characters specifying the actual entity. In the end, an optional checksum
character adds redundancy and allows spotting typos. The main part encodes an
incremental number, using the digits 0–9 and the letters A–Z with the exception
of D, I, J, L, O, Q, U and Y, which are excluded because of their similarity to
other digits or letters. This results in an alphabet of 28 symbols and thus a
total of up to 614,656 (28⁴) words using four characters. The checksum character
is taken from an extended alphabet (re-including Q, U and Y, giving 31 symbols)
and is calculated modulo 31 (a prime number). The identifier is not case sensitive.
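A minimal sketch of this schema in Python: the alphabet sizes and the modulo-31 checksum follow the description above, while the concrete symbol ordering and checksum formula are assumptions made for illustration.

```python
# 0-9 plus A-Z without D, I, J, L, O, Q, U and Y -> 28 symbols for the main part
MAIN_ALPHABET = "0123456789ABCEFGHKMNPRSTVWXZ"
# the checksum alphabet re-includes Q, U and Y -> 31 symbols
CHECK_ALPHABET = "0123456789ABCEFGHKMNPQRSTUVWXYZ"
assert len(MAIN_ALPHABET) == 28 and len(CHECK_ALPHABET) == 31

def make_identifier(entity_type: str, number: int) -> str:
    """Encode an incremental number as <type><4 base-28 chars><checksum>."""
    if not 0 <= number < 28 ** 4:  # up to 614,656 words
        raise ValueError("number out of range for four base-28 characters")
    chars = []
    for _ in range(4):
        number, rem = divmod(number, 28)
        chars.append(MAIN_ALPHABET[rem])
    main = "".join(reversed(chars))
    # one possible checksum: sum of symbol values modulo 31 (assumed formula)
    check = CHECK_ALPHABET[sum(MAIN_ALPHABET.index(c) for c in main) % 31]
    return entity_type.upper() + main + check

print(make_identifier("CE", 1))  # -> "CE00011" under the assumed ordering
```

With this assumed formula, any single-symbol typo in the main part changes the checksum, because the difference between two distinct symbol values (at most 27) is never zero modulo the prime 31.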
This RESOLUTE identifier schema applies to all entries of the RESOLUTE
database, and is therefore not limited to data, but also used to identify and
manage material (e.g. reagents, biological samples, etc.) and events (e.g.
experiments, shipping, etc.).
## Data Sharing
All data generated or compiled is openly shared among all participants (see
_Consortium Agreement_ paragraph 8.1.2). Data is shared either via the
RESOLUTE SharePoint (for documents, reports, etc.) or via the RESOLUTE
database (for primary, derived and interpreted data).
For sharing data with the scientific community, please refer to _Data
Dissemination_ below.
## Data Quality and Annotation
As well-structured and high-quality data is essential to higher-level analyses
and data reuse, we implement multiple layers of data documentation, annotation
and quality control.
When uploading data to the RESOLUTE database, a submission form requires the
author to provide annotation and metadata, including consistent linkage to
existing data. Once in the database, data can be reviewed and curated by all
participants. While standardized data annotation is required (see also _Making
data findable_ on page 15), an additionally implemented Wiki system also
encourages informal data annotation through free-text comments.
As we expect RESOLUTE reagents and data to lay the foundation of future SLC
research, the highest possible quality standards have to be met. We implement
quality control on three levels: A first quality control step happens at the
site of data generation, and specification of corresponding quality control
metrics (see _Data Types_ on page 5) is required when submitting to the
RESOLUTE database. In a second step of quality control, a _data release
proposal_ with a focus on data annotation and quality control has to be
prepared before the public release of a data set. In case of a scientific
publication in a journal, the peer-review process forms the third layer of
quality control and assessment of data annotation. To ensure a peer-review
step also for data releases without scientific publication, we implement _data
annotation and release workshops_ (see _Data Dissemination_ below).
## Data Dissemination
All relevant data, i.e. data generated or collected by the RESOLUTE project
that has passed certain quality control and data appraisal procedures, should be
rapidly published open access. The publication procedure follows one of three
different routes (please also refer to the three columns in the central panel
_RESOLUTE database_ in Figure 2):
* **data release:** Any participant can propose data for public release by submitting a written _data release proposal_ to the Work Package 8 team, specifying the following details:
  * an abstract on the data,
  * the motivation for publication,
  * the data identifiers,
  * the annotation status,
  * the standards compliance,
  * an assessment of quality control,
  * the license to use for publication.
The Work Package 8 team will collect these proposals for further discussion in
_data annotation and release workshops_, taking place regularly at consortium
meetings or possibly also as teleconferences in between. Participation in the
workshop is voluntary, and all participants will receive the collected data
release proposals two weeks before the workshop. At the workshop, the
proposals will be discussed in detail with the aim of curating data, extending
and refining annotation and strengthening quality control, but no immediate
decision on publication will be taken. Within a week after the workshop, the
Work Package 8 team, together with the proposal’s authors, will send a
refined version of the data release proposal to the _publication and
dissemination officers_ of all consortium partners. If there are no
objections within a 30-day review period, the data will be published and made
openly accessible in the RESOLUTE database. Minor objections can be resolved
directly with the objecting participant, while major objections require an
adaptation of the proposal, effectively restarting the review period.
* **supporting a scientific publication:** Data might also be disseminated as an integral part of, or supplemental material to, a scientific, peer-reviewed publication. For this route, the same _data release proposal_ as described above has to be submitted to the Work Package 8 team, but a data annotation and release workshop is not required. All scientific publications will follow the _gold_ or _hybrid_ open access policy by submitting the manuscripts to open access journals or by paying for this option in subscription-only journals. For more details on dissemination in the form of scientific publications, please refer to the _Dissemination and Exploitation Plan_ (Deliverable 9.2).
* **end of project:** At the end of the project (i.e. July 2023) we intend to release and make openly accessible the vast majority of all data within the RESOLUTE database that has not been published before.
# FAIR Data
Participants of the RESOLUTE consortium acknowledge that living a culture of
openness for this project will benefit the whole community of solute carrier
research. By ensuring that all generated research data will be findable,
accessible, interoperable and reusable (FAIR), the RESOLUTE data- and
knowledgebase will efficiently lead to discovery, integration and innovation.
To support these efforts, we have established contact with the IMI project
FAIRplus, which is expected to start in early 2019.
## Making data findable
To make something findable, a synthesis of identifiability, locatability,
actionability and discoverability is required. All data in the RESOLUTE
database will be _identifiable_ by their automatically assigned, unique and
persistent identifier (see _Data Identifiers_ on page 13). The data is
_locatable and actionable_ by a direct mapping of the identifier to a URI
(e.g. https://re-solute.eu/CL0001) as well as a corresponding CURIE (compact
URI, e.g. RES:CL0001), which will be registered at the resolving system of
identifiers.org 2 3. The RESOLUTE data- and knowledgebase allows published
data to be _discoverable_ in a manual way via the user-friendly web interface,
and in an automated way by implementing the Bioschemas 4 specification.
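To make the locator mechanics concrete, here is a minimal sketch of CURIE-to-URI expansion using the prefix from the example above (the actual prefix registration and resolution are handled by identifiers.org):

```python
# prefix map holding the RESOLUTE prefix from the example above
PREFIX_MAP = {"RES": "https://re-solute.eu/"}

def expand_curie(curie: str) -> str:
    """Expand a compact URI such as 'RES:CL0001' into a resolvable URI."""
    prefix, local_id = curie.split(":", 1)
    return PREFIX_MAP[prefix] + local_id

assert expand_curie("RES:CL0001") == "https://re-solute.eu/CL0001"
```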
In addition, data is annotated with an abstract and keywords (enabling free
text search), a set of generic metadata (i.e. the data author, a link to the
experiment and sample, timestamps, etc.) and a set of specific metadata (see
_Data Types_ on page 5). By building on established metadata standards (e.g.
Dublin Core 5 , ISA 6 , MIBBI 7 , etc.) and controlled vocabularies, we
increase the findability and interoperability of the data.
Furthermore, we also encourage participants to use Open Researcher and
Contributor Identifiers (ORCIDs) for referencing their contribution to
published datasets.
To further increase exposure and findability, we will register the RESOLUTE
data- and knowledgebase at FAIRsharing 8 , a registry for research standards
and databases, and re3data 9 , a registry for research data repositories.
## Making data openly accessible
All data generated or collected by the RESOLUTE project is subject to certain
quality control and data appraisal procedures. Data of sufficient quality and
relevance will be stored and shared in the RESOLUTE database and, ultimately,
will be openly accessible. Refer to _Data Dissemination_ on page 14 for the
different routes and timelines of data publication.
Data is available at the RESOLUTE web portal (https://re-solute.eu;
Deliverable 8.6), which forms the entry point to the RESOLUTE data- and
knowledgebase. Together, these provide open access to published scientific
papers, reports and research data without the need for authentication or
special software (apart from a standards-compliant modern web browser). In
addition, corresponding metadata and licensing information is provided.
We will also provide advanced tools for optimal data exploitation, i.e. a
web-based query system featuring interactive graphs and reports, as well as
APIs and workflow support (e.g. KNIME 10 nodes) for automated database access
and data mining (Deliverable 8.7).
## Making data interoperable
Coherent use of non-proprietary, standardized data and metadata formats,
aiming at maximum compliance with open source software applications, ensures
interoperability of RESOLUTE data. To enable inter-disciplinary
interoperability, relevant existing resources are referenced and metadata
standards established across research domains are used (see also _Making data
findable_ on page 15).
## Increasing data re-use
We support data re-usability by a rapid data generation to publication
timeline, based on our streamlined data release process (see _Data
Dissemination_ on page 14). Upon publication, data is openly accessible (see
_Making data openly accessible_ above) and re-usable for third parties by a
permissive open access standard license, provided either by the Creative
Commons 11 or by the Open Data Commons 12 project. Data re-usability is
also increased by employment of strict data quality assurance processes (see
_Data Quality and Annotation_ on page 14).
Algorithms or analysis software tools developed in the course of the project
that might be needed to re-use data and re-produce results will be published
alongside the data.
# Data Security
While there is no highly sensitive data (like patient data) in the RESOLUTE
project and all data will be ultimately open access, data security is still a
necessary and important issue. The following paragraphs list actual
implementation and infrastructure details ensuring secure data storage, access
and transfer.
**Figure 3.** Overview on RESOLUTE data- and knowledgebase infrastructure and
technology stack.
## Secure Storage
Documents like SOPs, protocols, reports, or meeting minutes, are
collaboratively written and stored in a SharePoint environment via a cloud-
hosted service (Microsoft Office 365).
Data (primary, derived and interpreted data) are stored at local IT
infrastructure of CeMM, managed by the CeMM IT department. The IT
infrastructure at CeMM is currently receiving a significant upgrade to its
storage hardware and is expected to be fully operational by mid-2019. The full
scale of the estimated required storage volume for RESOLUTE (see
_Data Stewardship_ on page 18) will then be available.
To minimize the risk of data loss, a local backup strategy as well as a cloud
backup strategy is in place. The local backup runs incrementally and writes to
magnetic tapes (IBM Backup system). The cloud backup is provided by a
Microsoft Azure cloud service and fulfils the rules of the EU General Data
Protection Regulation (GDPR). To detect data manipulation or corruption, we
employ MD5 checksums.
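As an illustration of that checksum step, a chunked MD5 computation in Python (file path and stored digest are hypothetical):

```python
import hashlib

def md5sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Comparing against the digest recorded at backup time reveals corruption:
# stored = "..."                                 # hypothetical recorded checksum
# assert md5sum("/data/raw/run42.raw") == stored
```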
Access to the CeMM storage server is strictly restricted and available only
from within the CeMM intranet. Therefore, special means of secure data access
and secure data transfer have to be established to allow participants to
submit or review internal data, and to allow public users to view and download
published data.
## Secure Access
Participants of the RESOLUTE consortium have access to all the documents and
information in SharePoint via a personalized RESOLUTE account, managed by a
Microsoft Azure Active Directory service. Implementing an OAuth2 workflow, the
same RESOLUTE account is used to securely access data and information in the
internal sections of the RESOLUTE database and the RESOLUTE knowledgebase (see
_Data Curation: the RESOLUTE Data- and Knowledgebase_ on page 12).
Authenticated as well as unauthenticated (i.e. public) access to data and
information in the RESOLUTE web portal, the RESOLUTE knowledgebase, and the
RESOLUTE database will be protected by transport layer security, using the
secure communication protocol HTTPS.
## Secure Transfer
Data is produced by individual participants locally, but managed and stored
centrally at the CeMM storage server. Offering a smooth user experience in
this process requires fast and secure data transfer. As the CeMM storage
server is only accessible locally (see _Secure Storage_ on page 17), remote
access and transfer via the internet to the CeMM storage server has to be
mediated by some means. We employ three strategies for secure transfer:
1. As an initial solution, we used the SharePoint environment, which provides an easy-to-use interface for data download and upload. However, transferring large files or bulk sets of files turned out to be cumbersome, and data at the SharePoint has to be moved manually from or to the CeMM storage server.
2. As a temporary solution, we set up an SFTP server with one dedicated user account per participant, allowing easy programmatic transfer of many and/or large files (see the sketch after this list). Again, data at the SFTP server has to be moved manually from or to the CeMM storage server on demand.
3. Starting from mid-2019, we will employ a dedicated data transfer server. Data will be transferred from the CeMM storage server to this data transfer server automatically on demand, where it is then available for download via the RESOLUTE web portal, the RESOLUTE database or the RESOLUTE knowledgebase. The same setup will also be used for data upload / data submission.
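As a sketch of the programmatic transfer enabled by strategy 2, the following uses the third-party paramiko library; host name, key file and paths are hypothetical:

```python
import paramiko

def upload_files(host, user, keyfile, files, remote_dir):
    """Upload a batch of local files to a per-participant SFTP account."""
    key = paramiko.RSAKey.from_private_key_file(keyfile)
    transport = paramiko.Transport((host, 22))
    transport.connect(username=user, pkey=key)
    sftp = paramiko.SFTPClient.from_transport(transport)
    try:
        for path in files:
            name = path.rsplit("/", 1)[-1]
            sftp.put(path, remote_dir + "/" + name)  # transfer one file
    finally:
        sftp.close()
        transport.close()

# upload_files("sftp.example.org", "participant01", "/home/user/.ssh/id_rsa",
#              ["results/batch1.csv"], "/upload")
```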
# Data Sustainability
Published data is not only available via the RESOLUTE data- and knowledgebase,
but is also submitted to technology-specific or domain-specific, community-
accepted repositories wherever possible (e.g. EBI’s European Nucleotide Archive
for next-generation sequencing data, ProteomeXchange for interaction proteomics
data, or ChEMBL for compound/transporter interaction data). For data-type
specific sustainability implementations please refer to _Data Types_ on page
5.
Data structure, data formats, metadata formats and also code base are built on
standards defined by major public initiatives, such as the World Wide Web
Consortium (W3C) or HUPO Proteomics Standards Initiative, ensuring
compatibility and sustainability.
# Data Preservation
All data entering the RESOLUTE database will be stored for the course of the
project. The RESOLUTE data- and knowledgebase will be supported by CeMM for at
least 5 more years after the end of the project. After that, for long-term
preservation, the diligent use of domain specific standards will ensure that
all data will be transferable to corresponding certified public data
repositories. A detailed data preservation plan will follow in the mid-term
update on the data management plan (Deliverable 8.2).
# Data Stewardship
CeMM is hosting the server infrastructure. We estimate production of a total
data volume of 300 TB throughout the project, corresponding to a data
production rate of 5 TB per month. Required resources for storage and backup
are provided by CeMM (see _Secure Storage_ on page 17) and are budgeted with a
total of 200,000 EUR.
CeMM (in particular Ulrich Goldmann) is also responsible for data management,
with strong support from UniVie (in particular Gerhard Ecker). At least one
representative per participant takes part in the monthly teleconferences on
RESOLUTE data management (Work Package 8).
Two individuals per participant are responsible for data dissemination as
_publication officers_. In addition, all participants are encouraged to
engage in the regular _data annotation and release workshops_ (see _Data
Dissemination_ on page 14).
# Ethical and Legal Considerations
There is no personal data generated in the RESOLUTE project. However, data is
generated on human cell lines; for details, please refer to the ethics reports
(Deliverables 10.1 and 10.2).
External data which might be collected and re-used (e.g. for the RESOLUTE
knowledgebase) is scrutinized for ethical or legal compliance beforehand.
# Appendix
## Abbreviations
<table>
<tr>
<th>
API
</th>
<th>
Application Programming Interface
</th> </tr>
<tr>
<td>
AP-MS
</td>
<td>
Affinity purification mass spectrometry
</td> </tr>
<tr>
<td>
CURIE
</td>
<td>
Compact Uniform Resource Identifier
</td> </tr>
<tr>
<td>
CV
</td>
<td>
Coefficient of Variation
</td> </tr>
<tr>
<td>
DMP
</td>
<td>
Data Management Plan
</td> </tr>
<tr>
<td>
FAIR
</td>
<td>
Findability, Accessibility, Interoperability, and Reusability
</td> </tr>
<tr>
<td>
HTTPS
</td>
<td>
Hyper Text Transfer Protocol Secure
</td> </tr>
<tr>
<td>
ICP-MS
</td>
<td>
Inductively Coupled Plasma Mass Spectrometry
</td> </tr>
<tr>
<td>
ID
</td>
<td>
Identifier
</td> </tr>
<tr>
<td>
LC-MS/MS
</td>
<td>
Liquid chromatography–tandem mass spectrometry
</td> </tr>
<tr>
<td>
MD5
</td>
<td>
Message Digest algorithm; used for calculating file checksums
</td> </tr>
<tr>
<td>
NGS
</td>
<td>
Next-Generation Sequencing
</td> </tr>
<tr>
<td>
OAuth2
</td>
<td>
Open Authorization 2.0
</td> </tr>
<tr>
<td>
ORCID
</td>
<td>
Open Researcher and Contributor Identifier
</td> </tr>
<tr>
<td>
PCA
</td>
<td>
Principal component analysis
</td> </tr>
<tr>
<td>
QSAR
</td>
<td>
Quantitative Structure–Activity Relationship
</td> </tr>
<tr>
<td>
RESOLUTE
</td>
<td>
Research Empowerment on Solute Carriers
</td> </tr>
<tr>
<td>
RNA-Seq
</td>
<td>
RNA (ribonucleic acid) sequencing
</td> </tr>
<tr>
<td>
SAR
</td>
<td>
Structure–Activity Relationship
</td> </tr>
<tr>
<td>
SFTP
</td>
<td>
Secure File Transfer Protocol
</td> </tr>
<tr>
<td>
SLC
</td>
<td>
Solute Carrier
</td> </tr>
<tr>
<td>
SOP
</td>
<td>
Standard Operating Procedure
</td> </tr>
<tr>
<td>
URI
</td>
<td>
Uniform Resource Identifier
</td> </tr>
<tr>
<td>
XML
</td>
<td>
eXtensible Markup Language
</td> </tr>
</table>
## Consortium Members
<table>
<tr>
<th>
**short name**
</th>
<th>
**full name**
</th> </tr>
<tr>
<td>
CeMM
</td>
<td>
CeMM Research Center for Molecular Medicine of the Austrian Academy of
Sciences
</td> </tr>
<tr>
<td>
UoX
</td>
<td>
University of Oxford
</td> </tr>
<tr>
<td>
UoL
</td>
<td>
University of Liverpool
</td> </tr>
<tr>
<td>
AXXAM
</td>
<td>
Axxam SpA
</td> </tr>
<tr>
<td>
ULei
</td>
<td>
Universiteit Leiden
</td> </tr>
<tr>
<td>
MPIMR
</td>
<td>
Max-Planck Institut für medizinische Forschung
</td> </tr>
<tr>
<td>
UniVie
</td>
<td>
Universität Wien
</td> </tr>
<tr>
<td>
Pfizer
</td>
<td>
Pfizer Ltd.
</td> </tr>
<tr>
<td>
Novartis
</td>
<td>
Novartis Pharma AG
</td> </tr>
<tr>
<td>
Boehringer
</td>
<td>
Boehringer Ingelheim
</td> </tr>
<tr>
<td>
Vifor
</td>
<td>
Vifor International Ltd.
</td> </tr>
<tr>
<td>
Sanofi
</td>
<td>
Sanofi Aventis Recherche et Développement
</td> </tr>
<tr>
<td>
Bayer
</td>
<td>
Bayer AG
</td> </tr> </table>
# 1 Publishable Summary
The LITMUS Data Management Plan (DMP) is an important deliverable and will
evolve throughout the lifespan of the project. The initial version of the DMP
will describe the organisational and technical details about data collection,
storage, retention, destruction, privacy and confidentiality. The Plan also
includes details of the procedures for the transfer of data and confirmation
of compliance with national and EU law on the protection of individuals with
regard to the processing of personal data. The DMP is a living document and
will be updated throughout the project.
# 2 Introduction
A data management plan is a key aspect of any modern life sciences project
where the work plan proposes generating large-scale, high-throughput data.
LITMUS (“Liver Investigation: Testing Marker Utility in Steatohepatitis”) is
just such a project, with multiple parallel work packages, proposing the
generation of imaging, genetic, epigenetic, transcriptomic, metabolomic,
lipidomic, proteomic and metagenomic data, as well as a range of targeted
biomarker, epidemiologic, nutritional and clinical data. Section 2.1 looks at
the data management structure within LITMUS and the integration and sharing of
data between collaborators and work packages. Section 2.2 provides confirmation
of compliance of said collaborators with national and EU law on the protection
of individuals with regard to the processing of personal data.
In total, it is predicted that the LITMUS project will generate a vast amount
of data. With the exception of the raw imaging data that will be stored in a
specialised facility, all data generated in LITMUS will be archived in the
dedicated and purpose built UNEW Research Data Warehouse, which has a capacity
of 6PB. The majority of this data comes from standardised techniques and is
generated in standardised format. This, along with the anonymisation of
samples, will allow much of the data generated by LITMUS to ultimately be made
available for sharing in public repositories of research data.
The DMP focuses on the security and robustness of local data storage and
backup strategies, and on a plan for this repository-based sharing, where and
when appropriate. The DMP will describe the organisational and technical
details about data collection, storage, retention, destruction, privacy and
confidentiality. Based on the guidelines provided by the EU and the Digital
Curation Centre, a template DMP document that captures these key aspects has
been developed. Section 2.3 describes the development of this template and a
copy of the template itself is available in Appendix A.
This template document has then been adapted for each work package (and, for
specific tasks within work packages) to ensure that relevant information is
captured for all aspects of the project where significant data sets will be
generated. Section 2.4 herein describes this process. The individual plans are
described in detail from Section 3 of this document onwards.
Thus, the template, and the individually adapted work package plans are
compiled into this single document, the “LITMUS Data Management Plan”. The DMP
is a living document and will evolve and be updated throughout the life of the
project.
## 2.1 Overview of Data Management in LITMUS
LITMUS is a multi-national, multi-disciplinary research programme that
involves over 50 collaborators, divided into eight work-packages:
* WP1 Project Coordination & Oversight
* WP2 Analysis, Evaluation & Evidence Synthesis
* WP3 Patient Cohorts & Bioresources
* WP4 Central Laboratory
* WP5 Imaging
* WP6 Reverse-translation & Pre-clinical models
* WP7 ‘QED’ Qualification, Exploitation & Dissemination
* WP8 Ethics requirements
All work-packages are tightly integrated so that data, samples and technical
expertise are shared throughout the programme, however, to reduce the risk of
bias and concerns about applicability in biomarker test performance,
individuals working in WP4-5 (some of whom may be using proprietary
technologies) will **_not_ ** have access to the clinical data, against which
the fidelity of each biomarker technology will be measured, until the analyses
are completed. The relationship between the work packages, flow of data, and
this “firewall” is shown in Figure 1, below.
_Figure 1: Overview of LITMUS Data Management_
The project will generate the following types of data:
* Participant information (WP3 & WP5): anonymised participant data, biochemical/histological indices, etc. – text.
* Imaging data (WP5): raw image data and processed image data.
* Research data (WP4 & WP6): large ’omics datasets, results of individual experiments, etc.
* Evaluation data (WP3): e.g. results of biomarker utility testing – text, spreadsheets.
In addition to the clinical data that will be collected in WP3, the various
LITMUS experimental work packages (particularly, but not limited to, WP4, WP5
& WP6) will produce a large amount of diverse data. Analyses conducted in WP2
by AMC, UNEW, ORU and other partners will also generate a large body of
derived data from these raw inputs. With the exception of raw image data,
which will be handled within WP5 by specialists from PERS and ANT and stored
in a separate facility that is optimised for imaging data, all data generated
in the LITMUS project will be robustly archived to geographically distinct
physical servers in the dedicated and purpose built UNEW Research Data
Warehouse, which has a capacity of 6PB. All data will be stored in accepted,
community-wide standard formats (e.g. FASTQ/BAM for sequencing data). Where
possible, data processing will proceed via command line or programmatic
procedures, and scripts will be maintained alongside the data – as well as in
a widely-used, online code repository, using version control (e.g. GitHub).
Furthermore, all data will be annotated with standards-compliant metadata,
such that downstream submission to a suitable community repository will be
simplified once it becomes necessary (i.e. on publication). A lightweight
front-end, web-based system for access to raw LITMUS datasets will be produced
as an extension to the system already under development for the EPoS project.
The addition of access-control features, allowing specific data to be shared
with specific partners when appropriate, will be implemented to ensure the
robust processes for data handling and role-specific access within LITMUS are
adhered to.
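As a purely illustrative example of the standards-compliant metadata annotation described above, a dataset might carry a small Dublin Core-style record ahead of repository submission; every field value below is hypothetical:

```python
# minimal Dublin Core-style annotation attached to a dataset (illustrative only)
metadata = {
    "dc:title": "LITMUS WP4 plasma proteomics, batch 01",  # hypothetical dataset
    "dc:creator": "LITMUS consortium partner",
    "dc:date": "2019-06-01",
    "dc:format": "text/csv",
    "dc:rights": "Restricted until publication",
}
```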
Throughout the programme of work, LITMUS will endeavour to fulfil the FAIR
principles: that research data are Findable, Accessible, Interoperable and
Reusable. Supporting data used in academic peer reviewed publications will be
made available, after publication via a recognised suitable datasharing
repository (e.g. zenodo or national repository if available). This policy will
be followed unless a partner can show that disseminating this data will
compromise IP or other commercial advantage. Processes for addressing this are
defined within the LITMUS Consortium Agreement. The project will use the
metadata standards and requirements of the repository used for sharing the
data.
## 2.2 Compliance with Data Protection Law
By signing the LITMUS Consortium Agreement, all member organisations have
agreed to comply with key principles and applicable laws and regulations when
collecting, storing, using or transferring any Personal Data and/or Human
Samples for activities conducted within LITMUS, in accordance with _Appendix
2: Actions involving Personal Data and/or Human Samples, Consortium Agreement
for LITMUS_ . In particular, all members of the LITMUS consortium have agreed
to comply with all laws, rules, regulations and guidelines applicable to the
collection, use, handling, disposal and further Processing of the Personal
Data and the Human Samples (including but not limited to, as far as Personal
Data are concerned, the EU Data Protection Directive 95/46/EC or succeeding
regulations such as the General Data Protection Regulation 2016/679 and
implementing national data protection laws), all as updated from time to time
and applicable laws and regulations relating to the privacy of patient health
information.
## 2.3 Development of the Template Plan
The template for the LITMUS data management plan was developed during the EPoS
project using guidance provided by the EU [1] and the Digital Curation Centre
[2], specifically the Checklist for a Data Management Plan, v4.0 [3]. The
template consists of nine _subparts_ – two introductory subparts outlining the
principles of the plan, and seven subparts describing the data itself, the
methods of collection, the plan for management and curation, data security,
sharing and the responsibilities of actors with respect to the data and the
DMP.
Each subpart of the DMP that requires individual contributions is accompanied
by guidance notes to assist Work Package leaders and data management
maintainers in filling out the plan for the work package under consideration
(see Appendix A).
All subparts of the plan are mandatory for all work packages, but some (e.g.
regulation of users, formal accreditation, etc.) may not be directly
applicable to all – this should be stated clearly for the relevant package and
subpart; the subpart heading should not be deleted.
## 2.4 Work Package Adaptation of the Template Plan
As referenced in Section 2, each work package (or WP task) within LITMUS
likely to generate significant data is required to complete the subparts 3 – 9
of the DMP template (see Appendix A for description). For many tasks, the core
information contained within these parts is likely to be similar, and so
adaptation of standard _boilerplate_ text is an accepted route for
streamlining this element of the deliverable, as long as the specific details
relevant to the work package or task are accurate.
Particular attention should be paid to the subparts describing the scale and
scope of the data, on the plan for data security and backup, and on the plan
for data sharing (where appropriate), bearing in mind the stated EU goal of
the Pilot Action on Open Access to Research Data “to improve and maximise
access to and re-use of research data generated by projects”.
It should also be noted that it is possible to exclude data from the open
access scheme where it involves identifiable information or where results can
reasonably be expected to be commercially or industrially exploited. Such
exceptions should be clearly noted in the relevant part of the DMP.
## 2.5 Conclusions
A robust Data Management Plan improves the understanding of the data to be
generated within a project, and of the requirements of securing and archiving
that data. It also highlights the data publication potential of a project –
data sets which can be released are identified early and appropriate steps can
be taken to ensure that sharing happens in a timely and efficient manner.
This document describes the development of just such a DMP for the LITMUS
project. This plan is a key element in maximising the impact of LITMUS, and
the plan itself is included in the appendices.
The data management plan is not a static document, but will continue to
undergo review as LITMUS progresses, with formal revisions due in Period 2 and
Period 4. New datasets will have their own DMP, and existing plans will be
checked as data is generated. Importantly the plan must also be implemented,
and progress checked against this implementation.
# Executive Summary
AIMS-2-TRIALS is a project bringing together multiple stakeholders, including
autistic people themselves and their families, to support the development of
new therapies and a better understanding of challenges impacting both quality
and length of life. This document is a first draft of a data management plan
for the project, and is intended to be a living document. It describes how the
data collected by the project will be made findable, accessible, interoperable
and reusable under field-wide best practices as well as all relevant regional,
national, and international law.
# Introduction
The AIMS-2-TRIALS project brings together autistic people and their families,
academic institutions, charities and pharmaceutical companies to study autism
and provide an infrastructure for developing and testing new therapies. In
line with the autism community’s priorities, the consortium will also focus on
why some autistic people develop additional health problems that severely
impact both quality and length of life.
The purpose of data collection within the project is to begin to address a
number of challenges facing individuals with Autism Spectrum Disorder (ASD).
In particular, with the data collected by AIMS-2-TRIALS we aim to enable a
precision medicine approach to better target treatments to patients through
the use of validated stratification biomarkers, and by testing novel or
repurposed drugs in a highly trained European-wide clinical trials network for
ASD that links to other international efforts. To fully achieve these goals,
the data collected by the consortium needs to be findable, accessible,
interoperable and reusable (FAIR) and soundly managed.
To that end, this document describes the Data Management Plan (DMP) for
AIMS-2-TRIALS and is based on the Guidelines on FAIR Data Management in
Horizon 2020 document version 3.0 of 26 July 2016. The AIMS-2-TRIALS DMP
describes the data management life cycle for the data to be collected,
processed and generated within the project. Strategies to guarantee good data
management are also provided.
This DMP initial release includes a collection of the first ideas from the
AIMS-2-TRIALS partners and as the project will evolve and progress, the
consortium will produce a second (to be issued in Month 24) and a third (to be
issued at the end of the project, Month 60) release in order to include the
applied procedures in terms of methodology and the exhaustive group of data
collected processed and generated by the project.
# Data Summary
The data of AIMS-2-TRIALS are new data and data collected during the EU-AIMS
project (grant no. 115300). The data from EU-AIMS were stored by CEA and have
been transferred to INSPAS mid-2018.
The dataset includes questionnaires, eCRF, and analysis results (brain
imaging, genetics, electrophysiology, eye-tracking, and neuropsychological
data). The dataset is composed of collected data sent by the different
partners of the consortium and of processed data generated by INSPAS or the
core analytics groups. These processed data are made available to the
consortium members after access is granted.
Currently, the data are in the state left by the CEA at the end of the EU-AIMS
project and transferred to INSPAS mid-2018.
Data formats are detailed in Table 1.
<table>
<tr>
<th>
**Data type**
</th>
<th>
**Collected**
</th>
<th>
**Processed**
</th> </tr>
<tr>
<td>
Questionnaires, eCRF,
Clinical data
</td>
<td>
json
</td>
<td>
json, csv, xlsx
</td> </tr>
<tr>
<td>
Genomics
</td>
<td>
Fastq, bam, vcf
Genotyping final report
</td>
<td>
bam, vcf, g.vcf
</td> </tr>
<tr>
<td>
MRI
</td>
<td>
DICOM
</td>
<td>
NIFTI
</td> </tr>
<tr>
<td>
EEG
</td>
<td>
csv, matlab, vhdr, vmrk, eeg
</td>
<td>
csv, matlab
</td> </tr>
<tr>
<td>
MRS
</td>
<td>
NIFTI
</td>
<td>
LCModel output
</td> </tr>
<tr>
<td>
Eye-tracking
</td>
<td>
csv, matlab
</td>
<td>
csv, matlab
</td> </tr> </table>
Table 1. Data format for collected and processed data for each type
The expected size of the data is 1PB.
The data are collected to be made available to all the consortium members that
have been granted access by one or several core analytics groups. For the
AIMS-2-Trials project, the data from the EU-AIMS project are re-used; these
data, previously hosted by the CEA, were transferred to INSPAS in mid-2018.
# FAIR Data
## Data discoverability (metadata provision)
No DOIs are yet assigned to the data. The clinical data are stored in a
database integrated with Padawan using the FHIR standard. Wherever possible,
metadata will be coded using standard dictionaries and ontologies (e.g. HPO,
HL7). In case metadata standards do not exist, they will be created following
the guidelines of the _Clinical & Phenotypic Data Capture Work Stream_ of
the _Global Alliance for Genomics & Health (GA4GH)_. The data and
metadata will be indexed regularly and made searchable through a faceted
search interface compliant with the _Discovery Work Stream_ of the GA4GH.
## Data identifiability
Each participant is assigned an identification number (PSC1) in each
investigative center. After transfer of the data to INSPAS, the PSC1 code is
double-coded by conversion to a PSC2 code. In each questionnaire file, the
participant data can be found using this PSC2 code. Experiment files
(genetics, brain imaging, etc.) of each participant are stored in a folder
named by this PSC2 code. The PSC2 is also added as a keyword for each file.
Other keywords describing the type of data are also added to each file. BIDS
format will be used for magnetic resonance imaging data, and other field-wide
standards will be used as appropriate.
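The double-coding step can be pictured with the sketch below; the actual PSC1-to-PSC2 conversion used at INSPAS is not specified here, so a keyed HMAC is assumed purely for illustration:

```python
import hashlib
import hmac

SECRET_KEY = b"hypothetical-key-held-by-the-coding-site"  # assumption, never shared

def psc2_from_psc1(psc1: str) -> str:
    """Derive a deterministic, non-reversible PSC2 code from a PSC1 code (assumed scheme)."""
    mac = hmac.new(SECRET_KEY, psc1.encode("utf-8"), hashlib.sha256)
    return mac.hexdigest()[:12].upper()

print(psc2_from_psc1("010001"))  # the same PSC1 always maps to the same PSC2
```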
Version numbers will be included for processed data. The possibility to
include version numbers for raw data is currently being evaluated.
## Making data openly accessible
### Data availability
During the project the data are shared only to consortium members. After the
project the data will be shared with researchers outside the project. These
data will be shared outside the consortium by integration with other
international efforts (e.g. EU ELIXIR), including the use of platforms for
data integration (e.g. GA4GH federated credentialing and tranSMART) that allow
federation and open access. This will include how best to enable the data to
be findable such as the eTRIKS IMI project data catalogue.
Data which cannot be shared in raw format due to sensitivity, regulatory
reasons, or because of a lack of participant consent will only be shared to
the fullest extent possible under relevant law. Access may require recipients
to consent to a data sharing agreement (particularly in light of GDPR, and
other local regulations as relevant).
### Data location and accessibility
The data are hosted in Paris in the storage facility of INSPAS. At INSPAS,
the data are stored in Padawan, a data warehouse developed internally at
INSPAS. The data are accessible through a website connected to Padawan. A
simple web browser is enough to access and download the data for authorized
researchers. Documentation and source code of Padawan and the website are not
accessible.
The repository containing the data is saved on a daily basis. The recovery
system is tested multiple times every year.
### Data restriction
Access to the data is restricted during the project. Access is granted to
researchers from the consortium by Core Analytics Groups, depending on the
type of data required. Access policies for data collection will be
constructed to respect data use restrictions specified by the consent with
which the data were obtained. Consent will be digitally coded and attached as
metadata using the system described _here_ as a foundation. In the future,
access to datasets may be streamlined through the integration of the systems
described _here_ , for identity concentration and federation, and _here_ ,
for algorithmically making access decisions based on user identity and access
policies. At the first connection an authorized user will have to accept Terms
and Conditions of the service. Identity of the researcher will be ascertained
by a login/password system.
## Making data interoperable
Questionnaire files and clinical data are stored as text files and in the
FHIR standard in the Padawan database. The experiment results are stored in
standardized format suitable to the data type (see Table 1). Data will be
shared according to current standards in the community (e.g., NIFTI for
magnetic resonance imaging data).
## Increase data re-use (through clarifying licenses)
### Data licensing
At the end of the project, data will be shared across a global federated
network with tiered access to data and metadata, following the model laid out
by the Beacon network and GA4GH.
### Reusability
Some data might not be shared because of their sensitivity or regulatory
reasons (e.g. personal data with no consent for sharing outside the
consortium). The choice of the participant will in all cases be respected.
**Data quality assurance process**
Upon receiving the data from the CEA, a data integrity check was performed.
# Allocation of resources
The AIMS-2-Trials Project Manager at INSPAS is responsible for data management
of the project. The cost of the data management and for making data FAIR is
covered by the AIMS-2-Trials grant.
# Overview of data use in DARE
The DARE project provides a hyper-platform to help primarily researchers and
research developers (experts working on the borderline between a scientific
domain and IT to provide solutions targeting researcher end-users) deal with
huge datasets and collaborate. As a hyper-platform, DARE makes extensive use
of existing or under-development European and international e-infrastructures
and platforms to deliver its services.
DARE deals with huge datasets by exploiting the elasticity of European science
clouds, such as the EGI Federated cloud. It will also utilise EUDAT services
for caching as well as for storing final data products, in the process making
them available to the research community. The European open science cloud
(EOSC), driven by projects such as EOSC-Hub 1 , is expected to assimilate
existing services such as the above. DARE is therefore expected to be EOSC-
ready.
To deal with huge datasets and to enable collaboration, DARE will expose to
its immediate users a Python-based workflow specification library, dispel4py
2 . This will allow users to define computational tasks in increasing levels
of abstraction. For instance, a research developer might provide system-
specific implementations of a processing task, while an end-user researcher
might use or customise a higher-level description of the same process. This
makes it easier for end-users to exploit European e-infrastructures, allows
developers to make the most of their knowledge of the underlying systems, and
provides the means for the two to collaborate. Automatically
mapping parts of workflows to underlying resources will also help the end-
users be efficient and productive. To provide linkage between components,
execution automation, allocation of certain resources, etc., DARE will depend
on a number of internal catalogues and data, the most important of which is
provenance.
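As an indication of what such an abstract workflow specification looks like, the sketch below follows the patterns of the dispel4py documentation; the class and port names are taken from its base module, but exact signatures may differ between versions:

```python
from dispel4py.base import IterativePE, ProducerPE
from dispel4py.workflow_graph import WorkflowGraph

class NumberProducer(ProducerPE):
    def _process(self, inputs):
        # emit a small stream of items onto the default output
        for i in range(10):
            self.write(ProducerPE.OUTPUT_NAME, i)

class Square(IterativePE):
    def _process(self, data):
        # values returned here are written to the default output
        return data * data

# Connect producer output to consumer input; the mapping of this abstract
# graph onto concrete resources (sequential, MPI, etc.) is chosen at enactment.
graph = WorkflowGraph()
graph.connect(NumberProducer(), ProducerPE.OUTPUT_NAME, Square(), IterativePE.INPUT_NAME)
```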
DARE’s provenance solution will collect a number of data regarding datasets,
locations, users, processing histories, etc. This information will aid other
components of the DARE platform to make decisions on behalf of users, or in
its simplest instantiation, to provide monitoring tools to users during the
execution of experiments. The DARE provenance data is itself large in volume,
requires the use of high-throughput technologies, and will store data
with implications for user privacy (see section “Ethics, Data Security and
External Resources” below, and D9.1 for more information).
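A single provenance record of the kind described above might take the following shape; all field names and values are illustrative, not the actual DARE schema:

```python
# one hypothetical provenance record captured during workflow enactment
provenance_record = {
    "workflow_id": "wf-000123",                   # which experiment produced it
    "user": "researcher-42",                      # on whose behalf it ran
    "process": "Square",                          # the processing element applied
    "inputs": ["doi:10.0000/example-input"],      # source dataset references
    "outputs": ["cache://products/out-0001.nc"],  # generated data products
    "started": "2019-03-01T10:15:00Z",
    "finished": "2019-03-01T10:17:42Z",
}
```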
It then follows that DARE:
1. Will make use of primarily external data to produce other data, which itself will also be transferred to external e-infrastructure services, from the point of view of data governance
2. Primarily focuses on open research data as its input
3. Will temporarily cache partial or complete datasets to aid computation and responsiveness
4. Will record extensive data provenance, including the source of a dataset, the processing DARE applied, the products created and their characteristics, the user on behalf of whom processing took place, etc. This information will remain internal to the DARE platform and it will be used for improving the performance of the DARE system.
## Use-cases
The DARE project includes the development of two use-cases, to showcase the
effectiveness and general usefulness of the DARE platform: use-case 1 / WP6
pertains to seismology and it addresses part of the requirements of the EPOS
community; use-case 2 / WP7 is related to climate science and it addresses
part of the requirements of the IS-ENES community.
The DARE use-case domains, as well as future domains that may make use of the
DARE platform, are the main source of external datasets. DARE may search for
domain datasets by integrating with relevant services. Depending on the
requirements of individual use-cases, it may also copy them temporarily to its
local cloud for processing. Alternatively, it may orchestrate transformation
where the data reside, if this is possible. The results of data processing,
mostly derivatives of open research data, as well as metadata and data
provenance will be associated by the DARE platform for use either directly by
the users, or implicitly by the platform to improve its performance. Valuable
data products will be archived making use of appropriate EUDAT services, as
well as of PID services, such as the ones to be developed by the Freya project
3 .
## Data use in the platform
### External Datasets
DARE, as a cloud-ready platform integrated into the European e-infrastructures
ecosystem, will be able to make use of datasets also available via services
inside the same ecosystem. Most of the data processing initiated and managed
by DARE is envisaged to take place within its own cloud space. It follows that
to be usable by DARE, very large external data will have to be located on the
same cloud as DARE (e.g. by e-infrastructure providers co-locating datasets
and/or exposing them via services). An alternative scenario of external data
being used by DARE would be through institutes installing the DARE platform
locally and independently close to their data. DARE could then be extended via
its high-level services to connect to these local data sources and make them
available for processing.
DARE follows a similar policy regarding data products. Transient/intermediate
data products (e.g. via partial processing, or of little interest to the
domain scientists), and depending on the storage capacity of the cloud local
to the DARE platform, will be stored and managed locally. Larger datasets, or
datasets of value, or reusable datasets will be stored making use of external
e-infrastructure services, e.g. by exploiting suitable EUDAT services. These
data will be assigned PIDs as needed, with DARE maintaining cataloguing
information for future use. DARE aims to perform these operations with little-
to-no manual work required by researchers and research developers.
### Internal Datasets and Catalogues
In order for the DARE platform to be able to provide high-level programmable
services to its users, it will need to hold information about its
environment and about the environment of its domain-specific applications
locally. Data provenance, i.e. information collected and managed during the
execution of experiments, data transformations, data transferring, etc., is
central to DARE’s objectives. Data provenance will be complemented by
additional linked catalogues holding data regarding:
1. Processing element specifications and high-level programmable services
2. Internal/transient and external datasets
3. Linked cloud infrastructures, e.g. access points, location, interface information, etc.
4. Known and available infrastructures of a different kind, e.g. HPC or specialised institutional hardware.
5. User and user-group history and preferences
The DARE internal catalogues will be used by DARE components, such as the
dispel4py, for optimising the execution of workflows, experiments and data
transformations and for automating the use of known and linked
e-infrastructures and software platforms. In addition, they will be consulted
by domain-specific or platform interactive services to inform users of DARE’s
operation and to allow them to interact with processes or experiments under
execution.
We anticipate that the data provenance catalogue will itself be a big dataset
due to the accumulation of data throughout the lifetime of a DARE
installation. The other catalogues (1-5, above) are expected to be of a more
static nature.
## Stakeholders
The DARE project and platform are relevant to a number of user roles but the
primary focus is on research developers or engineers (used interchangeably)
and researchers. An additional user role is the practitioner role -
practitioners will typically make use of DARE indirectly and will have a
narrower set of requirements and interaction points with the platform.
### Data owner
Any of the user roles interacting with DARE may be a data owner. Any data
transformation that takes place either on open research data made available
within DARE, or on previously created data (open or restricted) within DARE
and which results in the creation of a new dataset, temporary or permanent,
open or restricted, is owned by the user who initiated the transformation. The
initiation of the transformation may take place either directly or indirectly
(i.e. via a 3rd-party application; for example, see the use of DARE in the
IS-ENES use-case). Data ownership extends to provenance data, as well as to
processing elements and related implementations entered into corresponding
registries by any one user. Data owners decide whether individual pieces of
data should be openly accessible or restricted to a group of users or
institutions. DARE will implement only part of such ownership requirements
with an emphasis on open data and metadata.
### Research developer/engineer
A research developer or engineer is a domain expert with extensive knowledge
of building systems targeting users of the same domain. Research developers
are typically involved in services such as Climate4Impact 4 . DARE targets
the development of domain-specific applications and services by raising the
abstraction level of interaction with the underlying infrastructures and
therefore by making the work of research developers easier and more tractable.
A research developer will typically make use of the DARE API and of the
dispel4py workflow specification library to build 3rd-party services. DARE
developers may make certain data sets available to DARE for analysis and use
in higher-level applications. In the case these data sets are not open
research data, the developer or his/her institute will be the data owner.
### Researcher
A researcher is a direct user of the DARE platform, or they may interact with
it through another application or services. A researcher’s goal is to further
their research, to execute experiments, analyse and evaluate the results of
models, etc. Researchers may make data available to DARE or they may create
new datasets via using DARE.
### Practitioner
Practitioners are users of 3rd-party applications or services based on DARE,
with a narrower focus than that of a researcher. For instance, they may be
policy makers, emergency assessment experts, citizen scientists etc. Even
though such applications may be narrower in focus and functionality than an
application targeting researchers, practitioners may also create new data sets
using DARE services indirectly. In this case they become the owners of these
data sets.
# Datasets
Below we list the main datasets, both internal and external, to be used by the
DARE platform throughout the DARE project. These datasets are currently being
analysed as part of the user stories and the ongoing use-case requirements
tasks and will be specified in more detail in the mid-term DMP, due at the end
of month 18 of the project.
## CMIP5
<table>
<tr>
<th>
**Origin/Owner**
</th>
<th>
ESGF / Main node: _https://esgf-node.llnl.gov/projects/esgf-llnl/_
</th> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
Total size: 3.3 PB
</td> </tr>
<tr>
<td>
**Internal/External**
</td>
<td>
External. Depending on the task, DARE may choose to copy parts of the data
internally to carry out transformations and processing, or it may make use of
the ESGF processing nodes to delegate processing on behalf of the user. DARE
may cache results, or it may store final products using EUDAT services.
</td> </tr>
<tr>
<td>
**Interfacing with DARE (provisional)**
</td>
<td>
Via data locations either directly from the C4I platform, or through THREDDS
entries provided by C4I
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Climate data
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
NetCDF
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Open research data. Terms of use:
_https://cmip.llnl.gov/cmip5/terms.html_
</td> </tr> </table>
## CMIP6 (future)
<table>
<tr>
<th>
**Origin/Owner**
</th>
<th>
ESGF / Main node: _https://esgf-node.llnl.gov/projects/esgf-llnl/_
</th> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
Total size: ~30 PB
</td> </tr>
<tr>
<td>
**Internal/External**
</td>
<td>
External. Depending on the task, DARE may choose to copy parts of the data
internally to carry out transformations and processing, or it may make use of
the ESGF processing nodes to delegate processing on behalf of the user. DARE
may cache results, or it may store final products using EUDAT services.
</td> </tr>
<tr>
<td>
**Interfacing with DARE (provisional)**
</td>
<td>
Via data locations either directly from the C4I platform, or through THREDDS
entries provided by C4I
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Climate data
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
NetCDF
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Open research data. Terms of use:
_https://cmip.llnl.gov/cmip5/terms.html_
</td> </tr> </table>
## TDMT catalogue
<table>
<tr>
<th>
**Origin/Owner**
</th>
<th>
INGV
</th> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
from GBs to TBs
</td> </tr>
<tr>
<td>
**Internal/External**
</td>
<td>
External. Based on the task requests, DARE should copy part of the data
internally and use them for processing and analyses.
</td> </tr>
<tr>
<td>
**Interfacing with DARE (provisional)**
</td>
<td>
Through the VERCE platform or the specific webservice of the database
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Seismological data describing seismic source parameters
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
QuakeML
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Open research data
</td> </tr> </table>
## GCMT catalogue
<table>
<tr>
<th>
**Origin/Owner**
</th>
<th>
Harvard
</th> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
from GBs to TBs
</td> </tr>
<tr>
<td>
**Internal/External**
</td>
<td>
External. Based on the task requests, DARE should copy part of the data
internally and use them for processing and analyses.
</td> </tr>
<tr>
<td>
**Interfacing with DARE (provisional)**
</td>
<td>
Through the VERCE platform or the specific webservice of the database
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Seismological data describing seismic source parameters
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
QuakeML
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Open research data
</td> </tr> </table>
## Stations
<table>
<tr>
<th>
**Origin/Owner**
</th>
<th>
IRIS, INGV, GFZ, ETH, IPGP, LMU, NOA, KOERI and others
</th> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
from GBs to TBs
</td> </tr>
<tr>
<td>
**Internal/External**
</td>
<td>
External. Based on the task requests, DARE should copy part of the data
internally and use them for processing and analyses.
</td> </tr>
<tr>
<td>
**Interfacing with DARE (provisional)**
</td>
<td>
Through the VERCE platform or the specific webservice of the databases
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Seismological data describing the parameters of the seismic stations
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
XML
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Open research data
</td> </tr> </table>
## Recorded waveforms
<table>
<tr>
<th>
**Origin/Owner**
</th>
<th>
EIDA-Orfeus
</th> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
PBs
</td> </tr>
<tr>
<td>
**Internal/External**
</td>
<td>
External. Based on the task requests, DARE should copy part of the data
internally and use them for processing and analyses.
</td> </tr>
<tr>
<td>
**Interfacing with DARE (provisional)**
</td>
<td>
Through the VERCE platform or the specific web-service of the database
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Seismological data representing the recorded seismograms
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
SEED, or any other ObsPy-readable format
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Open research data
</td> </tr> </table>
## Shakemaps
<table>
<tr>
<th>
**Origin/Owner**
</th>
<th>
INGV
</th> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
TBs
</td> </tr>
<tr>
<td>
**Internal/External**
</td>
<td>
External. Based on the task requests, DARE should copy part of the data
internally and use them for processing and analyses.
</td> </tr>
<tr>
<td>
**Interfacing with DARE (provisional)**
</td>
<td>
Direct queries to INGV database
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Seismological data describing the ground motion parameters
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
binary, png/jpg
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Open research data
</td> </tr> </table>
## Green’s functions
<table>
<tr>
<th>
**Origin/Owner**
</th>
<th>
IRIS
</th> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
PBs
</td> </tr>
<tr>
<td>
**Internal/External**
</td>
<td>
External. Based on the task requests, DARE should copy part of the data
internally and use them for processing and analyses.
</td> </tr>
<tr>
<td>
**Interfacing with DARE (provisional)**
</td>
<td>
Through the specific webservice of the database
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Seismological data describing the seismic wavefield for specific basis
functions
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
ASCII, or any other ObsPy-readable format
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Open research data
</td> </tr> </table>
## Meshes and Wavespeed models
<table>
<tr>
<th>
**Origin/Owner**
</th>
<th>
Internal library
</th> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
TBs
</td> </tr>
<tr>
<td>
**Internal/External**
</td>
<td>
Internal. The plan is to use the library of meshes and wavespeed models
already created for VERCE and enrich it with new models that users can select
for their experiments.
</td> </tr>
<tr>
<td>
**Interfacing with DARE (provisional)**
</td>
<td>
Accessing the internal database as done in the VERCE platform
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Seismological data describing the geometry and the physical properties of the
waveform simulation medium
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
ASCII
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Depends on the sharing conditions of the data owner
</td> </tr> </table>
## Seismic source models
<table>
<tr>
<th>
**Origin/Owner**
</th>
<th>
Internal library
</th> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
TBs
</td> </tr>
<tr>
<td>
**Internal/External**
</td>
<td>
Internal. The plan could be to create a library of possible source models for
given earthquakes (especially finite fault models) as already done for meshes
and wavespeed models. Users can select them for their experiments.
</td> </tr>
<tr>
<td>
**Interfacing with DARE (provisional)**
</td>
<td>
Accessing the internal database as done in the VERCE platform
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Seismological data describing the parameters of the seismic sources
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
ASCII
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Depends on the sharing conditions of the data owner
</td> </tr> </table>
## Synthetic waveforms
<table>
<tr>
<th>
**Origin/Owner**
</th>
<th>
Internal library
</th> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
From TBs to PBs. Expected to grow with the number of experiments performed
through the DARE platform.
</td> </tr>
<tr>
<td>
**Internal/External**
</td>
<td>
Internal. The plan could be to store the synthetic seismograms internally
after simulations, including those for basic source mechanisms (i.e. Green’s
functions) and for perturbed source parameters (i.e. derivative synthetics),
so that they can be recalled and recombined in Seismic Source (SS) analyses or
Ensemble Simulation (ES) analyses. This will reduce the computing demand (see
deliverable D6.1).
</td> </tr>
<tr>
<td>
**Interfacing with DARE (provisional)**
</td>
<td>
Accessing the internal database as done in the VERCE platform
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Seismological data representing the seismograms simulated with given models of
the structure and the seismic source
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
ASCII, SEED, or any other ObsPy-readable format
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Depends on the sharing conditions of the data owner
</td> </tr> </table>
## Data Provenance Dataset
<table>
<tr>
<th>
**Origin/Owner**
</th>
<th>
DARE
</th> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
Expected to grow linearly with the number of operations DARE performs,
potentially to hundreds of GBs
</td> </tr>
<tr>
<td>
**Internal/External**
</td>
<td>
Internal
</td> </tr>
<tr>
<td>
**Interfacing with DARE (provisional)**
</td>
<td>
Internal RESTful interface, internal direct DB access
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Semi-structured data with relations, MongoDB storage
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
JSON
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Restricted access and non-distributable data. To be acquired and used
indirectly according to the DARE platform use terms and conditions, to be
finalised in line with D9.1.
</td> </tr> </table>
## Registry of Processing Elements
The registry of processing elements (RPE) catalogues the signatures and
implementations of primarily dispel4py processing elements (PEs). Due to
dispel4py’s composability, some of these PEs correspond to larger workflows.
<table>
<tr>
<th>
**Origin/Owner**
</th>
<th>
DARE
</th> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
MBs
</td> </tr>
<tr>
<td>
**Internal/External**
</td>
<td>
Internal
</td> </tr>
<tr>
<td>
**Interfacing with DARE (provisional)**
</td>
<td>
Internal RESTful interface, internal direct DB access
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Semi-structured data with relations, MySQL/MariaDB storage
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
Binary
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
PE signatures: open; Implementations: choice of open or restricted
</td> </tr> </table>
## Registry of Internal Datasets
The registry of internal datasets (RID) catalogues datasets which are internal
to DARE, whether temporary or more permanent. These datasets are typically
generated by processing taking place within the DARE platform.
<table>
<tr>
<th>
**Origin/Owner**
</th>
<th>
DARE
</th> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
MBs
</td> </tr>
<tr>
<td>
**Internal/External**
</td>
<td>
Internal
</td> </tr>
<tr>
<td>
**Interfacing with DARE (provisional)**
</td>
<td>
Internal RESTful interface, internal direct DB access
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Semi-structured data with relations, MySQL/MariaDB storage
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
Binary
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Open or restricted depending on user preference
</td> </tr> </table>
## Registry of Software Components
The registry of software components (RSW) catalogues the software available on
the DARE platform, along with typical usage patterns, characteristics, etc.
<table>
<tr>
<th>
**Origin/Owner**
</th>
<th>
DARE
</th> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
MBs; GBs if the actual software is also stored.
</td> </tr>
<tr>
<td>
**Internal/External**
</td>
<td>
Internal
</td> </tr>
<tr>
<td>
**Interfacing with DARE (provisional)**
</td>
<td>
Internal RESTful interface, internal direct DB access
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Semi-structured data with relations, MySQL/MariaDB storage
</td> </tr>
<tr>
<td>
**Format**
</td>
<td>
Binary
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Restricted to the DARE platform. Information held within the RSW should be
irrelevant to DARE users, and opening it might encourage resource abuse.
</td> </tr> </table>
# Ensuring FAIRness
This section summarises the core characteristics that (meta)data collections
and repositories should bear in order to adhere to the FAIR principles, and
reports on the means that DARE will use to conform to the FAIR paradigm. In
cases where FAIRness conflicts with broader ethical and security issues, we
suggest possible mitigation measures to be examined and applied to the
project’s outcomes.
## Findability
1. (meta)data are assigned a globally unique and persistent identifier
2. data are described with rich metadata
3. (meta)data are registered and indexed in a searchable resource
4. metadata specify the data identifier
## Accessibility
1. (meta)data are retrievable via their identifier using standardised communication and transfer protocols
2. the aforementioned protocols are open, free and implementable by third parties
3. the protocols foresee and include authentication and authorisation mechanisms to be used where necessary
4. metadata remain accessible even when the referenced data are no longer available
## Interoperability
1. (meta)data use a formal, accessible, shared and broadly applicable language for knowledge representation
2. (meta)data use vocabularies that adhere themselves to FAIR principles
3. (meta)data include qualified, resolvable references to other (meta)data
## Reusability
1. (meta)data are released with a clear, consistent and accessible data usage license
2. (meta)data are associated with their provenance
3. (meta)data meet domain-relevant community standards
# Data Ethics, Privacy and Security
Even though DARE is a research project which will only have experimental and
demonstration deployments, data ethics, privacy and security are matters we
take very seriously. Deliverable 9.1 - _Data Ethics_ outlines the strategy of
DARE with regards to these issues, to ensure that DARE does not collect user
data beyond what is necessary to meet its goals and objectives and that DARE
users are appropriately informed. Further, it outlines strategies to protect
users from having their private information indirectly exposed via the use of
machine learning within the DARE platform.
# Concluding Remarks
The initial version of this deliverable outlines the DARE guidelines and
strategy for data management, to be adopted and implemented by all partners
throughout the project’s duration. It is expected that additional datasets and
types of (meta)data will emerge as the project progresses. The data management
plan will evolve accordingly, incorporating specific actions for handling such
assets while adhering to the overarching principles of FAIRness, security and
privacy outlined in this first version.
# 1\. Introduction
It is important to clarify that, for the purpose of making this deliverable
self-contained, the Consortium has decided to highlight (using a blue font)
the sections added relative to the D5.9 IPR and Data Management Plan submitted
6 months ago, to which this document is an update (v2).
The additions fundamentally concern three aspects:
1. _Treatment of publications_ as one of the dissemination efforts intended in the project (section 2.3.3).
2. _Data Management in practice (sections 3.4.1 and 3.4.2, as part of section 3.4 Data Security)_, describing the tools used, both from the point of view of:
   1. RDAs' data sharing, storage and handling (research)
   2. Internal project document management (project management)
3. _Update of Annexes:_ the annexes have been updated and are presented in this second version of the deliverable: an updated NDA (Non-Disclosure Agreement) and an updated DLA (Database License Agreement).
References to the timeline of the previous IPR & DMP deliverable (D5.9) and
the subsequent one (D5.11) have also been updated throughout the text.
That said, the purpose of this document continues to be what was stated in
D5.9, and presented as follows.
## 1.1. Object of this document (Summary)
The purpose of this document is to provide the plan for managing the data
generated and collected during the project (the Data Management Plan, DMP), to
prepare a first version of the Intellectual Property Rights (IPR) plan, and to
briefly introduce the plan for the use and dissemination of foreground.
The target groups of this report are the consortium of the OaSIS project, the
participating Regional Development and/or Innovation Agencies and Small and
Medium Enterprises (SMEs), and anyone outside the project who wants to use the
methodology and technology to be developed in it.
On the one hand, this document describes the way Intellectual Property Rights
are handled in OaSIS project. It summarises the general IPR agreement, which
is a part of the Consortium Agreement of the project; and addresses the main
issues of IPR management: the ownership, the protection of foreground,
dissemination, the access rights, and the confidentiality.
On the other hand, the Data Management Plan (DMP) describes the data
management life cycle for all datasets to be collected, processed, and/or
generated by OaSIS research project. It covers:
* the handling of research data during and after the project
* what data will be collected, processed or generated
* what methodology and standards will be applied
* whether data will be shared/made open and how
* how data will be curated and preserved
Following the EU’s guidelines, this document may be updated - if appropriate -
during the project lifetime (in the form of deliverables).
# 2\. IPR Management, Use and Dissemination Plan
## 2.1. Publishable Executive Summary
This document describes the way Intellectual Property Rights are handled in
the OaSIS project. It summarises the general IPR agreement, which is a part of
the
Consortium Agreement of the project. Besides, this document introduces the IPR
Management Group and addresses the main issues of IPR management. These issues
are the ownership, the protection of foreground, dissemination, the access
rights, and confidentiality.
This deliverable is also the initial version of the plan for disseminating the
activities and the generated knowledge and products of the OASIS project. This
plan will be completed along the life of the project and will not become final
until the end of it. We expect future versions to be lighter and to complement
this deliverable concisely. The plan is designed not only as a vehicle to
communicate the activities of the project and to raise general awareness of
opportunities, but also as a "knowledge sharing" initiative: a platform to
favour the establishment of new links between Regional Agencies, to provide
direct or indirect services to SMEs (i.e. better segmentation) and, to a
lesser extent, to favour new links between Regional Agencies and other
stakeholders (e.g. industry and academic stakeholders).
## 2.2. Introduction
### 2.2.1. Purpose and target group
The purpose and scope of this report are to prepare a first version of the IPR
plan and to briefly introduce the plan for the use and dissemination of
foreground. The target groups of this report are the consortium of the OASIS
project, the participating Agencies and SMEs, and anyone outside the project
who wants to use the methodology and technology to be developed in it.
### 2.2.2. Contributions of partners
Linknovate has prepared the Deliverable 5.9 IPR & DMP following OpenAire
guidelines 1, 2 and other H2020 project examples. All the partners of the
consortium have participated in its editing and review.
### 2.2.3. Baseline
This document outlines the first version of the IPR plan within the OASIS
project. It touches on various IPR issues that will have to be dealt with over
the course of the project.
In the process of designing the dissemination actions, it is necessary to bear
in mind the three main possible kinds of dissemination regarding the level of
involvement of the targeted audiences, in order to fully understand the scope
and the activities of the plan: Dissemination for Awareness, Understanding,
and Action. This will be addressed in detail in the subsequent deliverables
regarding the Dissemination Plan.
### 2.2.4. Relations to other activities
The Plan for IPR management will be updated taking into account the value-
added methodologies and technologies developed during the project.
## 2.3. Consortium Agreement
The Consortium Agreement (CA) was signed by all the project partners; it came
into force on the date of its signature by the Parties and shall continue in
full force and effect until the complete fulfilment of all obligations
undertaken by the Parties under the EC-GA and the Consortium Agreement. The
OASIS CA describes the initial agreement between partners about Intellectual
Property Rights management.
The present document summarises the main topics covered by the CA on IPR
management strategy and also introduces the IPR Management Group, which is
responsible for monitoring IPR issues and consists, in this case, of the 3
consortium partners:
* LKN: Linknovate Science S.L.
* EURADA: European Association of Regional Development Agencies
* PK: Cracow University of Technology
By IPR, we mean “Intellectual Property Rights”, i.e. the rights of the
Background 3 and the rights of the Foreground 4 generated by the OASIS
partners and funded by the European Commission grant under the EC H2020 Grant
Agreement 777443.
The main concerns are naturally linked to the technology transfers necessary
to achieve project results. Developments from partners that will be included
in a commercial product will need a license grant of some form between the
granting partner (the institution that developed the foreground) and the
receiving partner (the institution building the product incorporating this
foreground).
The fundamental rule governing the ownership of Results is expressed in the
Article 26.2 of the Grant Agreement and repeated in Article 8.1 of the
Consortium Agreement. The general principle is that Results are owned by the
partner that generates them.
### 2.3.1. Notion of Joint Ownership
Complementary to the abovementioned rule concerning ownership of the project
results is the principle of joint ownership laid down in the Article 26.2 of
the Grant Agreement.
Where several Beneficiaries have jointly carried out work generating
Foreground and where their respective contribution cannot be established
(“Joint Foreground”), they own the Foreground jointly.
The joint owners must agree (in writing) on the allocation and terms of the
exercise of their joint ownership (‘joint ownership agreement’), to ensure
compliance with their obligations under this Agreement.
Unless otherwise agreed in the joint ownership agreement, each joint owner may
grant non-exclusive licences to third parties to exploit jointly-owned results
(without any right to sub-license), if the other joint owners are given: (a)
at least 45 days advance notice and (b) fair and reasonable compensation.
Once the results have been generated, joint owners may agree (in writing) to
apply another regime than joint ownership (such as, for instance, transfer to
a single owner with access rights for the others).
Beneficiaries undertake that Joint Foreground joint ownership agreement shall
be done in good faith, under reasonable and non-discriminatory conditions.
In case of joint ownership of Foreground each of the joint owners shall be
entitled to use the joint Foreground as it sees fit, and to grant non-
exclusive licenses to third parties, without any right to sub-license, without
obtaining any consent from, paying compensation to, or otherwise accounting to
any other joint owner, unless otherwise agreed between the joint owners
(following art 8.2 of the Grant Agreement).
### 2.3.2. Protection of Foreground
Where Foreground is capable of industrial or commercial application (even if
it requires further research and development, and/or private investment), it
should be protected in an adequate and effective manner in conformity with the
relevant legal provisions, having due regard to the legitimate interests of
all participants, particularly the commercial interests of the other
beneficiaries.
Furthermore, the beneficiaries are obliged to protect the results of the
project adequately for an appropriate period and with appropriate territorial
coverage if:
1. the results can reasonably be expected to be commercially or industrially exploited and
2. protecting them is possible, reasonable and justified (given the circumstances).
Where a beneficiary which is not the owner of the Foreground invokes its
legitimate interest, it must, in any given instance, show that it would suffer
disproportionately great harm.
Beneficiaries should, individually and preferably collectively, reflect on the
best protection strategy in view of the use of the foreground both in further
research and in the development of commercial products, processes, or
services.
Applications for protection of results (including patent applications) filed
by or on behalf of a beneficiary must - unless the Agency 5 requests or
agrees otherwise or unless it is impossible - include the following:
“ _The project leading to this application has received funding from the
European Union’s Horizon 2020 research and innovation programme under grant
agreement No 777443”._
Furthermore, all patent applications relating to Foreground filed shall be
reported in the plan for the use and dissemination of Foreground, including
sufficient details/references to enable the Agency or the Commission to trace
the patent (application). Any such filing arising after the final report must
be notified to the Commission including the same details/references.
In the event the Agency assumes ownership, it shall take on the obligations
regarding the granting of access rights.
_**Transfer of Foreground** _
Each Party may transfer ownership of its own Foreground following the
procedures of the Grant Agreement (GA) Article 30.1.
It may identify specific third parties it intends to transfer the Foreground.
The other Parties hereby waive their right to object to a transfer to listed
third parties according to the GA Article 30.1.
The transferring Party that intends to transfer ownership of results shall,
however, give at least 45 days advance notice (or less if agreed in writing)
to the other Parties that still have (or still may request) access rights to
the results. This notification must include sufficient information on the new
owner to enable any beneficiary concerned to assess the effects on its access
rights. The Parties recognise that in the framework of a merger or an
acquisition of an important part of its assets, a Party may be subject to
confidentiality obligations, which prevent it from giving the full 45 days
prior notice foreseen in GA Article 30.1.
### 2.3.3. Dissemination
Dissemination activities including but not restricted to publications and
presentations shall be governed by Article 29 of the Grant Agreement.
The Parties shall ensure dissemination of their own Foreground as established
in the Grant Agreement provided that such dissemination does not adversely
affect the protection or use of Foreground and subject to the Parties’
Legitimate Interests.
Publication, Publication of another Party’s Foreground or Background and
Cooperation, as well as Use of names, logos and trademarks will be updated in
this IPR Plan deliverable in the next version ( _D5.11 in month 18_ ), as it
is influenced by the inputs and agreements reached for the _Communication and
Dissemination Plan v1 (D5.1 in month 6)_ .
### Software Access Rights
Parties’ Access Rights to Software do not include any right to receive Source
Code 6 or Object Code 7 ported to a certain hardware platform, or any right
to receive Source Code, Object Code or the respective Software Documentation
8 in any particular form or detail, but only as available from the Party
granting the Access Rights.
The intended introduction of Intellectual Property (including, but not limited
to Software) under Controlled License Terms in the Project requires the
approval of the PMB (Project Management Board) to implement such introduction
into the Consortium Plan.
Parties agree that Access Rights to Software which is Background or Foreground
only include: access to the Object Code; where normal use of such Object Code
requires an Application Programming Interface (API) 9 , access to the Object
Code and such API; and, if a Party can show that the execution of its tasks
under the Project or the Use of its own Foreground is technically or legally
impossible without Access to the Source Code, access to the Source Code to the
extent necessary.
The Source Code shall be deemed Confidential Information, and the use of the
Source Code shall be strictly limited to the necessary extent.
For the avoidance of doubt, any grant of Access Rights not covered by the CA
shall be at the absolute discretion of the owning Party and subject to such
terms and conditions as may be agreed between the owning and receiving
Parties.
6 Source Code: means software in human-readable form normally used to make
modifications to it, including, but not limited to, comments and procedural
code such as job control language and scripts to control compilation and
installation.
7 Object Code: means software in machine-readable, compiled and/or executable
form.
8 Software Documentation: means software information, being technical
information used in, or useful in, or relating to the design, development, use
or maintenance of any version of a software programme.
9 Application Programming Interface: means the application programming
interface materials and related documentation containing all data and
information to allow skilled Software developers to create Software interfaces
that interface or interact with other specified Software.
### 2.3.4. Access Rights for Use
_Access rights concern both Background and Results of the Project
(Foreground)._ This section will be updated in following versions of this
deliverable _(D5.11 in month 18 and D5.12 in month 24)_ as it will be
influenced by agreements and inputs in _Exploitation and Sustainability Plan
v3 (D5.7 in month 18). Two cases are foreseen for granting access to the
Parties – for the purpose of implementation of the Party’s task in the Project
and for exploiting the Party’s own results._
_General rules regarding Access rights to the project results are established
by Article 31 of the Grant Agreement and the principles governing Access
rights to the Background are laid down in the Article 24 of this Agreement._
### Confidentiality (Non-Disclosure)
All information in whatever form or mode of transmission, which is disclosed
by a Party (the “Disclosing Party”) to any other Party (the “Recipient”) in
connection with the Project during its implementation and which has been
explicitly marked as “confidential”, or when disclosed orally, has been
identified as confidential at the time of disclosure and has been confirmed
and designated in writing within 15 days at the latest as confidential
information by the Disclosing Party, is “Confidential Information”.
In relation to Confidential Information, each Party undertakes not to use such
Confidential Information for any purpose other than in accordance with the
terms of the OASIS Grant Agreement and CA during a period of five years from
the date of disclosure by the disclosing Party, and to treat it and use
reasonable endeavours in order to ensure it is kept confidential and not
disclose the same to any third party without the prior written consent of the
owner in each case during the aforesaid period of five years.
The above shall not apply for disclosure or use of Confidential Information,
if and as far as the Recipient can show that:
* The Confidential Information becomes publicly available by means other than a breach of the Recipient’s confidentiality obligations;
* The Disclosing Party subsequently informs the Recipient that the Confidential Information is no longer confidential.
## 2.4. Exploitable Background/Foreground and its Use
OaSIS uses a number of existing, available backgrounds of the parties to the
project, which were listed in the CA. Table 1 shows the updated list.
According to Article 24 of the Grant Agreement, the partners must identify and
agree (in writing) on the background for the action (‘agreement on
background’).
In this section, the intellectual property developed within the scope of the
project will be detailed work-package by work-package. A list of foreseen
exploitable foreground can be found in Table 2. As the project evolves, more
exploitable products are expected to come up. Any information or knowledge
identified as Background for the project shall be agreed upon by all partners
and expressed in writing in Attachment 1 to the Consortium Agreement, entitled
_Attachment 1: Background included_.
Therefore, this list will be updated regularly by the Project Coordination or
when requested by any partner of the consortium.
<table>
<tr>
<th>
Partner nº
</th>
<th>
Participating Legal Name / Background included
</th> </tr>
<tr>
<td>
1
</td>
<td>
Cracow University of Technology – The Coordinator: None
</td> </tr>
<tr>
<td>
2
</td>
<td>
Linknovate Science SL:
* Extensive use of machine learning and natural language processing to identify the organisations in the document that match the user's query and to aggregate the results in a unique ‘entity profile’ (record linkage).
* Delivery of competitive intelligence insights in its current ‘commercial product’: the “search engine” Linknovate.com. Data sources currently covered: Scientific Publications, Conference Proceedings, Grants (EC Cordis, UK Gateway to Research (GtR), Narcis (Holland’s CRIS systems), the USA SBIR/STTR grant program and others), Patents and trademarks (USPTO and EPO), News (specialised sources such as MIT Techreview, FastCompany, etc.), Web (monitoring of corporate websites).
* The use of different ranking retrieval algorithms, which exploit multiple features of documents to show the most relevant results for a user query. These capabilities lay the technology foundation for Trend identification, Expert search and Data Visualization based on R&D and R&I data.
* This IP, recognised by LKN as background IP, is an essential part of the LKN product: Linknovate.com.

This background is considered trade secret / undisclosed know-how and business information by LKN, protected by Directive (EU) 2016/943 of the European Parliament and of the Council of 8 June 2016 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure.
</td> </tr>
<tr>
<td>
3
</td>
<td>
EURADA: None
</td> </tr> </table>
Table 1. List of Background included in project OASIS.
So far, the Consortium has not identified any Foreground IP that should be
mentioned. It is expected that IP may be generated from the tasks and actions
in WP3 (Data Collection and Big Data Analysis) and from the methodology
editions and improvements generated over the life cycle of the project.
<table>
<tr>
<th>
WP
</th>
<th>
Exploitable foreground description
</th>
<th>
Timetable
</th>
<th>
Expected IPR protection
</th>
<th>
Owner & other partners involved
</th> </tr>
<tr>
<td>
2, 3, 4
</td>
<td>
Methodology for benchmarking SME segmentation support and segmentation measures. Improvements in innovation methodologies
</td>
<td>
M3 to M24
</td>
<td>
[Estimated] Appropriate CC license
</td>
<td>
All partners [LKN, EURADA, CTT]
</td> </tr>
<tr>
<td>
3
</td>
<td>
Data analytics software tools
</td>
<td>
M12 to M24 (2019)
</td>
<td>
To be updated in following IPR & DMP versions 7
</td>
<td>
LKN
</td> </tr> </table>
Table 2. Exploitable Foreground list in project OASIS.
## 2.5. Conclusions
This deliverable shows how Intellectual Property is managed in the OaSIS
project and summarises the general agreement, which is settled in the
Consortium Agreement. This report will be updated taking into account the
value-added methodologies and technologies developed during the project.
The plan for the use and dissemination of foreground that appears in this
report will serve as a brief summary and a preview of "Deliverable 5.1.
Communications and Dissemination Plan" and "Deliverable 5.2. Exploitation and
Sustainability Plan", which will be prepared in month 6 within "WP5:
Communication, Dissemination and Exploitation", led by EURADA. This plan will
be completed in deliverable D5.1 and also along the life of the project. The
dissemination action is based on three pillars: Dissemination for Awareness,
Understanding, and Action.
# 3\. Data Management Plan
The OaSIS project aims at providing a thorough, evidence-based analysis for
segmentation of innovative SMEs and the assessment of the effectiveness of the
multiple support mechanisms currently used in agencies across Europe.
Towards this end, the OaSIS project will use unique Big Data and Machine
Learning capabilities to develop a performance-based methodology for SME
segmentation which will allow the (Regional and National) Development Agencies
to target SMEs in their industrial fabric with the instruments that
statistically proved to produce the highest impact for similar types of
companies.
Following the EU’s guidelines regarding the DMP, this document will be updated
during the project lifetime (in the form of deliverables).
The following is the OaSIS Project Data Management Plan. It is structured as
recommended in the template for Data management Plans in _Guidelines on FAIR_
_Data Management in Horizon 2020_ .
## 3.1. Data Summary
This document is the OaSIS Project Data Management Plan (DMP). The DMP
describes the data management life cycle for all datasets to be collected,
processed, and/or generated by a research project. It covers:
* the handling of research data during and after the project
* what data will be collected, processed or generated
* what methodology and standards will be applied
* whether data will be shared/made open and how
* how data will be curated and preserved
To provide "data-driven" insights, such as the identification of "gazelle"
companies (enterprises with a high growth potential), SMEs with high
innovation potential and SMEs with high internationalisation potential, and
the capacity to categorise them into clusters of interest, industries, and/or
value chains, the consortium shall document, analyse, and correlate the data
collected and shared by European Regional Development Agencies (the Agencies).
The Agencies sharing their data will be close collaborators of the project,
and will have Third Party status when the conditions set by the Consortium are
met (see details in the Open Call for Agencies, publicly available on the
project website, and the corresponding deliverable) and this is approved in
the corresponding GA Amendment by the EC.
The agencies will provide data in ".xml", ".csv", and other formats. The
consortium will consider the optimal manner in which to store and process this
data; at this first stage, a relational (SQL) database will be used for its
analysis.
The insights generated will be of use to the project's Agencies and their SME
clients.
This DMP covers two types of data subject to FAIR use:
1. The data shared by the collaborating agencies (in .xml and/or .csv formats, and under CC-BY licenses, a type of Creative Commons “share-alike” license 8 ), and subject to the confidentiality agreements in place by OASIS partners.
2. The insights generated by the consortium (in the form of data visualisations, reports, and deliverables).
## 3.2. FAIR Data
### 3.2.1. Making data findable, including provisions for metadata
The data produced and/or used in the project will be discoverable with
metadata. The consortium will automatically extract primary keywords from the
agencies’ data to optimise possibilities for re-use.
Naming conventions have not been defined yet. This will be one of the first
tasks to be performed when data collection from the agencies begins.
End data products to be produced by OASIS, such as reports, white papers, etc.
will have version numbers, but several of the more basic datasets (SMEs data
visualisations for instance) will be dynamic in nature and some providers of
data may not define versions.
### 3.2.2. Making data openly accessible
The OASIS project covers two types of data subject to FAIR use:
1. The data uploaded by the collaborating agencies (in .xml, .csv and other formats), and subject to confidentiality agreements: updated v2 of NDA and Database License Agreement attached in the Annexes of this document.
2. The insights generated by the consortium (e.g. in the form of data visualisations, reports, and deliverables).
Regarding the Agencies' data, it will be closed/restricted. Only the
respective agency and the consortium will have access to this data, and only
for project purposes. The data shared by the Agencies will carry the Creative
Commons licence CC-BY, which requires attribution (giving the author or
licensor credit in the manner they specify) but allows copying, distributing,
displaying and performing the work, and making derivative works and remixes
based on it.
As for OASIS data, or insights generated by the consortium based on agencies’
aggregated data, part of it will be open and part will have a commercial use
to further explore the sustainability of the technologies and services
developed under OASIS.
The Information Platform, contained under the Oasis Portal (
_https://oasisportal.eu_ ), will be the key tool to make the data and data
products accessible to a broad range of Stakeholders, including policymakers,
industry, and academia. Access will be possible for both verification and re-
use purposes and the Information Platform will contain view, search, and
visualisation services alongside standardised view- and download-features.
No special software is expected to be needed to access the data that will be
made publicly available. If needed, and/or if requested by our close
collaborating Agencies, who will work under Third Party status (see the
Consortium Agreement and Deliverable D1.2 Consortium Strategy), LKN (as
technical lead partner) will consider enabling a REST API service for data
access, promoting and using CERIF (Common European Research Information
Format) 9 as the metadata format where possible.
The Consortium will consider depositing the different types of data,
associated metadata, documentation, and code which will be made public in
certified repositories, such as _OpenAire_ (the consortium will explore
appropriate arrangements with the repository), and on the project's websites (
_www.projectoasis.eu_ and _https://oasisportal.eu_ ).
Registered access to the data (e.g. data insights, graphics, etc.) accessible
via the project's website will be required. The consortium will keep logs of
closed/restricted datasets (for the project partners), while a log-in will be
required for accessing open data. However, the data hosted in OpenAire will be
publicly available (complying with the repository policy) and no logs will be
required.
### 3.2.3. Making data interoperable
The Consortium will have a focus on the interoperability of the data produced.
The consortium will allow for data exchange and re-use between researchers,
institutions, organisations, countries, etc. To make the data produced in the
project interoperable and consumable, the consortium will consider using CERIF
(the Common European Research Information Format) for the data that is not
restricted (data that is not shared by Regional Agencies but produced by the
Consortium).
### 3.2.4. Increase data re-use (through clarifying licenses)
In order to permit the widest possible re-use of the data, the openly
accessible datasets will be CC-BY licensed, which requires attribution (giving
the author or licensor credit in the manner they specify) but allows copying,
distributing, displaying and performing the work, and making derivative works
and remixes based on it.
Restricted datasets cannot be used by third parties, as expressed in the NDA
and the "Database License Agreement" used for data-sharing purposes with
Agencies and other organisations.
## 3.3. Allocation of Resources
Since the data gathered from Regional Agencies is heterogeneous and OASIS is a
data-intensive project, we estimate the costs for making the data FAIR at
between 3 and 5 person-months. Costs associated with data gathering, cleaning
and indexing, as well as the open-access capacities of research data, have
been considered in the grant agreement.
The project partner in charge of data management is LKN, as technology (WP3)
and DMP lead. All partners are responsible for contributing to the DMP and to
_FAIR data_ use.
Data preservation is estimated for the project lifetime plus three years.
Longer-term data preservation will be considered with the mentioned data
repositories (e.g. openAIRE). Further decisions will be made by the Consortium
board updating this deliverable.
## 3.4. Data Security
Provisions for data security will differ with regards to the nature of the
data.
Information coming from open datasets will not receive any special treatment.
This information will be processed directly as it arrives at the platform, and
then indexed. Temporary copies of the original data will be kept in OASIS'
information processing nodes, but no extra security measures will be put in
place for this kind of information, given its public nature.
### Confidential Data/Restricted Data obtained from the Agencies
Information from restricted datasets will be uploaded via SFTP 10 , given
that it provides secure file transfer and manipulation functionality over any
reliable data stream. Information will be stored on a dedicated GNU/Linux
server with state-of-the-art safety measures. In particular, SFTP will chroot
every agency user to prevent non-authorised parties from accessing the stored
information. Access policies to the server will be restricted to LKN
employees, who have signed appropriate NDAs. Weekly incremental backups are
planned in a secondary, equally secure repository.
Regarding data and cybersecurity, the following standard measures are in place
in order to reduce the risk:
* End-to-end encryption of data in transit.
* Encryption of sensitive data at rest, to minimise exposure in the event of non-authorised access (see the sketch after this list).
* Periodic vulnerability testing of the secure servers.
* A standardised, enforced data deletion policy for sensitive data when required by a user.
* Advanced user-level data security measures with fine-grained access control.
Regarding personal information: it is not part of the core and objectives of
the OaSIS project to deal with any personal data, but rather with professional
and commercial data which can be used for a better segmentation of SMEs (such
as number of employees, sector, value chain, turnover, growth over time,
participation in advanced services or events, and others). However, in the
case that any agency shares with OASIS any information containing personal
data, we plan to act according to the GDPR regulation, in effect since May 25,
2018, and according to the procedures of the local Data Protection Agency
(AGPD).
For these matters, Jose López-Veiga is designated as our Data Protection
Delegate; he will coordinate the registration of our data processing
activities with the AGPD and will be in charge of the risk analysis of the
personal data. Diego Garea is appointed, as of 1 October 2018, as the new
designated Data Protection Delegate, since José López is leaving his full-time
role at Linknovate in the short term.
Moreover, an audit of the security measures will be performed, with the
results of the risk analysis producing a concise report which will update this
deliverable (IPR & DMP) in the versions that follow D5.9. According to the
GDPR and AGPD regulations, we will immediately communicate any breach of the
security controls over the personal data.
Furthermore, as stated in our agreements (NDA and Database License Agreements)
with the Agencies, which are included in the Annexes of this deliverable:
_A restricted access procedure shall be implemented for each of the Partner’s
employees and collaborators, in particular Parties shall:_
− _use the data solely in relation to the purpose of the Project;_
− _disclose the data solely to its employees or other associates, whatever the
legal basis of their cooperation is, who are directly involved in the Project
implementation and who have a reasonable need to know the content of the
database for the purposes of the duties assigned to as a result of or in
connection with the Project (“Recipient”);_
− _ensure that every Recipient who receives or obtains access to the database
shall be aware of its confidential nature and shall comply with the
restrictions on non-disclosure;_
− _notify the disclosing Agency as soon as practicable after it becomes aware
of any event of loss or unauthorised disclosure or use of any substantial part
of the database;_
### 3.4.1. OaSIS Research: Data sharing in practice
As anticipated in the previous points (from D5.9), different alternatives were
envisioned, with SFTP being the preferred choice. In D5.10 we update this
information, having also considered cloud services (Dropbox, Google Drive,
etc.) and peer solutions (Syncthing). However, given the need for privacy,
security and accountability, and the regulatory requirements on the location
of the data, we decided to store the information on our own operated server
and use SFTP, the SSH File Transfer Protocol (also Secure File Transfer
Protocol).
SFTP is a network protocol that provides file access, file transfer, and file
management over any reliable data stream. It was designed by the Internet
Engineering Task Force (IETF) as an extension of the Secure Shell protocol
(SSH) version 2.0 to provide secure file transfer capabilities. The IETF
Internet Draft states that, even though this protocol is described in the
context of the SSH-2 protocol, it could be used in a number of different
applications, such as secure file transfer over Transport Layer Security (TLS)
and the transfer of management information in VPN applications.
The Consortium opted for what is considered the safest route that does not
overcomplicate the user experience: SFTP (Secure File Transfer Protocol). The
Oasis Consortium had to take into account that many of the managers/leaders of
the data analytics tasks in RDAs are not experts in IT and/or data management.
The procedure to share data needed to be as simple as possible to reduce
friction in the data-sharing process, both in terms of support to their staff
and regarding the level of detail and learning curve inside the RDA, without
(necessarily) having to involve a person with an IT background.
### _Set up_
SFTP was set up in mid-2018. Several clients were suggested to RDAs for their
use, depending on the OS (Operating System), such as FileZilla for macOS and
WinSCP for Windows.
A username and password were created for one representative of each of the
collaborating RDAs, to be shared with them and members of the consortium.
### _Monitoring_
Logs of activity on the SFTP server are monitored so that non-authorised
activity can be identified and acted upon. This provides one of the
functionalities Oasis needs for the secure protection of the shared and stored
data.
Members of the consortium (e.g. Cracow Technical University staff authorised
to research and participate in Project OaSIS, and Eurada's and Linknovate's
specific staff members) were authorised to access the servers, and their
activity is monitored as well.
These persons are:
* Mikolaj Golunski (Cracow Technical University, CTU)
* Piotr Ciochon (Cracow Technical University, CTU)
* Jakub Kruszelnicki (Cracow Technical University, CTU)
* Irena Jakubiec (Cracow Technical University, CTU)
* Anna Irmisch (Eurada)
* Esteban Pelayo (Eurada)
* Manuel Noya (Linknovate)
* Jose López Veiga (Linknovate)
* Diego Garea Rey (Linknovate)
* Sabela Cereijo (Linknovate)
* Gustavo Viqueira (Linknovate)
Figure 1. The FileZilla client for secure access to SFTP, put in place for
data sharing in mid-2018.
The system is currently used by the Agencies who participated in the Oasis
Open Call and who are interested in becoming Third Parties or High Level
parties, with the capabilities to share their data with the Oasis Project
responsible persons.
### 3.4.2. OaSIS Project Management: Data sharing in practice
For internal project management, Cracow Technical University (coordinator)
suggested an easy-to-use tool through which the consortium could easily share
draft documents and non-critical data (i.e. excluding the confidential data
shared by RDAs), automatically synced between the partners. Linknovate
selected Syncthing as the tool to do so, for the following reasons:
* Open-sourced tool
* Distributed peer to peer sync
* Readily available locally (in each member computer) and online via Syncthing website
* Automatic sync of documents uploaded to the project folders
And the following characteristics:
* _Private_ . Data is never stored anywhere else other than the clients’ computers. There is no central server that might be compromised, legally or illegally.
* _Encrypted_ . All communication is secured using TLS. The encryption used includes perfect forward secrecy to prevent any eavesdropper from ever gaining access to data.
* _Authenticated_ . Every node is identified by a strong cryptographic certificate. Only nodes explicitly allowed can connect to the user cluster.
Figure 2. Syncthing open source project in GitHub.
_Syncthing Technology_
Syncthing is a BYO cloud model where the users provide the hardware that the
software runs on. The larger the number of mesh devices, the more efficiently
data can be transferred. It supports IPv6 and, for those on IPv4 networks, NAT
punching and relaying are offered. Devices connecting to each other require
explicit approval, which increases the security of the mesh. All data, whether
transferred directly between devices or via relays, is encrypted using TLS
(Transport Layer Security).
Conflicts are handled by renaming the older file with a "sync-conflict" suffix
(along with a time and date stamp), enabling the user to decide how to manage
two or more files of the same name that have been changed between syncs. GUI
wrappers can use these files to present the user with a method of resolving
conflicts without having to resort to manual file handling.
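As an illustration, the short sketch below locates such conflict copies in a shared folder so they can be reviewed manually; the folder path is hypothetical.

```python
# Minimal sketch: list Syncthing conflict copies for manual review.
from pathlib import Path

shared_folder = Path("~/OaSIS-shared").expanduser()  # hypothetical sync folder
for conflict in sorted(shared_folder.rglob("*sync-conflict*")):
    print(conflict)  # e.g. report.sync-conflict-20180601-101500.docx
```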
Figure 3. Syncthing running on one of the member’s computers.
Efficient syncing is achieved via compression of metadata or all transfer
data, block re-use and lightweight scanning for changed files, once a full
hash has been computed and saved. Moving and renaming of files and folders are
handled efficiently, with Syncthing intelligently processing these operations
rather than re-downloading data from scratch.
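The sketch below illustrates the block-level idea in a simplified form: split a file into fixed-size blocks and hash each one, so that a later scan only needs to re-send blocks whose hash changed. The block size and hash function are illustrative, not Syncthing's actual parameters.

```python
# Minimal sketch: fixed-size block hashing for change detection.
import hashlib

BLOCK_SIZE = 128 * 1024  # 128 KiB blocks (illustrative)

def block_hashes(path):
    """Return one SHA-256 digest per fixed-size block of the file."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def changed_blocks(old, new):
    """Indices of blocks that differ and would need to be transferred."""
    return [i for i, (a, b) in enumerate(zip(old, new)) if a != b]
```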
_Technical Characteristics of the Syncthing System_
Syncthing was the only open-source system identified by the consortium that
supports multiple operating systems and offers the following combination of
characteristics.
### _Operating system_
Windows, OS X, Linux, Android, BSD, Solaris
### _License_
Indicates the licensing model under which the software is published. For
open-source systems this may be the specific license (e.g. GPL, LGPL, MIT),
for closed source/proprietary/commercial software this may be the model
(subscription per user, perpetual per device, etc.).
In the case of Syncthing, the license is MPL v2: a free and open source
software license developed and maintained by the Mozilla Foundation. It is a
weak copyleft license, characterised as a middle ground between permissive
free software licenses and the GNU General Public License (GPL) that seeks to
balance the concerns of proprietary and open source developers.
### _Portable_
Syncthing is portable: designed to run without needing to modify the
configuration of the computer it runs on. The name 'portable' comes from the
fact that such applications can be carried with the user on a portable drive
and run on any supported computer, even if the user does not have
administrative privileges on it.
### _Detect conflict_
Syncthing is able to detect conflicts. This property indicates whether the
software detects when a requested operation may result in data loss.
### _Renames/moves_
Syncthing provides this capability. When a file/directory on one side of the
synchronisation is renamed/moved, the program replicates the rename/move on
the other side of the synchronisation. This feature saves bandwidth when
operating on remote systems but increases the analysis duration.
### _Prior file versions, revision control_
Syncthing has this capability. This property indicates whether the software
allows one to revert to an earlier version of a particular file/directory.
_Online storage:_ available.
### _Scheduling or service_
Syncthing provides this capability. This property indicates whether the
software runs automatically, either via a scheduler or as a system service.
Without it, a user would have to invoke the software manually each time a
synchronisation is needed.
Last stable version: 0.14.45 (June 2018).
The OaSIS consortium currently uses Syncthing to share, quickly edit and sync
the files needed for day-to-day work on the OaSIS project.
#### 3.4.3. OaSIS Personal Data Management: Data sharing in practice
First and foremost, it is important to highlight that OaSIS Project research
does NOT concern personal data, hence all workflow regarding personal data is
purely for communication purposes and to increase transparency and trust with
the collaborating RDAs.
Accordingly, we have provided documents informing the RDAs of this.
_Documents shared to inform RDAs_
The following details are shared with collaborating agencies, as part of the
OaSIS transparency in data sharing and data processing workflow concerning
GDPR (personal data handling, outside of OaSIS research):
<<
_Data relevant for the Project_
The aim of the Project is to improve the efficiency and effectiveness of the
innovation support provided in European regions and to strengthen the capacity
of regional development agencies and other support providers. This objective
is going to be achieved by developing a new methodology of SME segmentation.
Research is focusing on currently used approaches to the segmentation of
innovating SMEs that shall result in identifying correlations and similarities
within the groups of SMEs that showed the highest increase in performance
after receiving specific support. In order to achieve this goal an analysis of
historical data regarding support provided to SMEs by Regional Development
Agencies is going to be performed. The data that is going to be used for
benchmarking purposes concerns solely business information on the SMEs, their
characteristics, support instruments applied and their consequences for the
performance of a given SME. Such data is not considered personal data.
_If you wish to get involved in OaSIS research project_
We assure you that research activity in the OaSIS project concerns solely
historical business and economic data, which is not under the protection
granted by GDPR. We do not ask you for any personal data; in fact, article 4.4
of the initial NDA agreement expressly states that _if the database contains
any sensitive or personal information the Agency shall extract such
information from the database prior to its delivery._
The OaSIS project received funding from the European Union’s Horizon 2020
research and innovation programme and its implementation is verified by
competent European Institutions. Consortium Partners are obliged to comply
with ethical and research integrity principles, applicable international, EU
and national law.
By contacting us, a person representing a Regional Development Agency may
reveal his/her personal data. We use the contact details provided solely to get in
touch with the Agency in order to give information on the project, its scopes,
outcomes, profits, and conditions of participation. All Consortium Partners
comply with GDPR and respective national regulations regarding personal data
protection.
>>
The following document, reproduced here in full, is also shared with
participating agencies and made public 11 :
### 3.5. Ethical Aspects
The OASIS Project consortium has not identified any specific ethical issues
related to the work plan, outcomes or dissemination, as the program does not
deal with any personal or medical data. We do note that individual partners
will have to adhere to any special ethical rules. In fact, this is a point
where the Consortium wants to update the Ethical Requirements for Data
treatment.
The data management and compliance team are undertaking a significant review
of all policies and procedures on ethics and data use. We will continue to
work on the current data protection policy with the commitment to protect and
process data with adherence to current legislation and policies, in particular
General Data Protection Regulation that comes into force on 25 May 2018.
The project adheres to the commitment to holding any data in secure conditions
and will make every effort to safeguard against accidental loss or corruption
of data.
The Regional Development Agencies are obliged to remove any sensitive or
personal information from the databases prior to their disclosure.
Agencies will participate in the project based on informed consent, may
withdraw at any time, and can request information not be published. Informed
consent forms describe how data is stored and used and how confidentiality is
maintained in the long-term. Measures include data anonymisation, aggregating
individual-level data, and generalising the meaning of the detailed text.
# 1\. Introduction
## 1.1. Object of this document (Summary)
The purpose of this document is to provide the plan for managing the data
generated and collected during the project, i.e. the Data Management Plan
(DMP); to prepare a first version of the Intellectual Property Rights (IPR)
plan; and to make a brief introduction to the Plan for the use and
dissemination of foreground.
The target groups of this report are the consortium of the OaSIS project, the
participating Regional Development and/or Innovation Agencies and Small and Medium-sized
Enterprises (SMEs), and anyone outside the project who wants to use the
methodology and technology to be developed in it.
On the one hand, this document describes the way Intellectual Property Rights
are handled in OaSIS project. It summarizes the general IPR agreement, which
is a part of the Consortium Agreement of the project; and addresses the main
issues of IPR management: the ownership, the protection of foreground,
dissemination, the access rights, and the confidentiality.
On the other hand, the Data Management Plan (DMP) describes the data
management life cycle for all datasets to be collected, processed and/or
generated by OaSIS research project. It covers:
* the handling of research data during and after the project
* what data will be collected, processed or generated.
* what methodology and standards will be applied.
* whether data will be shared/made open and how
* how data will be curated and preserved
Following the EU’s guidelines, this document may be updated - if appropriate -
during the project lifetime (in the form of deliverables).
# 2\. IPR Management, Use and Dissemination Plan
## 2.1. Publishable Executive Summary
This document describes the way Intellectual Property Rights are handled in
OASIS project. It summarizes the general IPR agreement, which is a part of the
Consortium Agreement of the project. Besides, this document introduces the IPR
Management Group and addresses the main issues of IPR management. These issues
are the ownership, the protection of foreground, dissemination, the access
rights, and the confidentiality.
This deliverable is also the initial version of the plan for disseminating the
activities and the generated knowledge and products of the OASIS project. This
plan will be completed along the life of the project and will not become final
until the end of it. We expect future versions to be lighter and complement
this deliverable concisely. The plan is designed not only as a vehicle to
communicate the activities of the project and for the general awareness of
opportunities but also as a “knowledge sharing” initiative, as a platform to
favour the establishment of new links between Regional Agencies, and to
provide direct or indirect services to SMEs (i.e. better segmentation) and, to
a lesser extent, to favour new links between Regional Agencies and other
stakeholders (e.g. industry and academic stakeholders).
## 2.2. Introduction
### 2.2.1. Purpose and target group
The purpose and scope of the report is to prepare a first version of the IPR
plan and make a brief introduction about the Plan of the use and dissemination
of foreground. The target groups of this report are the consortium of the
OASIS project, the participating Agencies and SMEs, and anyone outside the
project who wants to use the methodology and technology to be developed in it.
### 2.2.2. Contributions of partners
Linknovate has prepared the Deliverable 5.9 IPR & DMP following OpenAire
guidelines 1 , 2 and other H2020 project examples. All the partners of the
consortium have participated in editing and reviewing it.
### 2.2.3. Baseline
This document outlines the first version of the IPR plan within the OASIS
project. It touches on various IPR issues that will have to be dealt with over
the course of the project.
In the process of designing the dissemination actions, it is necessary to bear
in mind the three main possible kinds of dissemination regarding the level of
involvement of the targeted audiences to fully understand the scope and the
activities of the plan: Dissemination for Awareness, Understanding, and
Action. This will be contemplated in detail in the following Deliverables
regarding the Dissemination Plan.
### 2.2.4. Relations to other activities
The Plan for IPR management will be updated taking into account the value-
added methodologies and technologies developed during the project.
## 2.3. Consortium Agreement
The Consortium Agreement (CA) was signed by all the project partners and came
into force on the date of its signature by the Parties; it shall continue
in full force and effect until complete fulfilment of all obligations
undertaken by the Parties under the EC-GA and the Consortium Agreement. The
OASIS CA describes the initial agreement between partners about Intellectual
Property Rights Management.
The present document summarizes the main topics covered by the CA on the IPR
management strategy and also introduces the IPR Management Group, which is
responsible for monitoring the IPR issues, in this case the 3 consortium
partners:
* LKN: Linknovate Science S.L.
* EURADA: European Association of Regional Development Agencies
* PK: Cracow University of Technology
By IPR, we mean “Intellectual Property Rights”, i.e. the rights of the
Background 3 and the rights of the Foreground 4 generated by the OASIS
partners and funded by the European Commission grant under the EC H2020 Grant
Agreement 777443.
The main concerns are naturally linked to the technology transfers mandatory
in order to achieve project results. Developments from partners that will be
included in a commercial product will need a license grant of some form
between the granting partner (the institution that developed the foreground)
and the receiving partner (the institution building the product incorporating
this foreground).
The fundamental rule governing the ownership of Results is expressed in the
Article 26.1 of the Grant Agreement and repeated in Article 8.1 of the Consortium
Agreement. The general principle is that Results are owned by the partner that
generates them.
### 2.3.1. Notion of Joint Ownership
Complementary to the abovementioned rule concerning ownership of the project
results is the principle of joint ownership laid down in the Article 26.2 of
the Grant Agreement.
3
" _Background_ " is information and knowledge (including inventions,
databases, etc.) held by the participants prior to their accession to the
grant agreement, as well as any intellectual property rights which are needed
to implement the action or exploit the results. Background is not limited
to input owned, but potentially extends to anything the beneficiaries lawfully
hold (e.g. through a licence with the right to sub-licence). It also extends
to input held by other parts of the beneficiary’s organisation.
4
“ _Foreground_ ” means the results, including information, materials and
knowledge, generated in a given project, whether or not they can be protected.
It includes intellectual property rights (IPRs such as rights resulting from
copyright protection, related rights, design rights, patent rights, and
others), similar forms of protections (e.g. sui generis right for databases)
and unprotected know-how (e.g. confidential material). Thus, foreground
includes the tangible (e.g. prototypes, source code and processed earth
observation images) and intangible (IPR) results of a project. Results
generated outside a project (i.e. before, after or in parallel with a project)
do not constitute foreground.
Where several Beneficiaries have jointly carried out work generating
Foreground and where their respective contribution cannot be established
(“Joint Foreground”), they own the Foreground jointly.
The joint owners must agree (in writing) on the allocation and terms of
exercise of their joint ownership (‘joint ownership agreement’), to ensure
compliance with their obligations under this Agreement.
Unless otherwise agreed in the joint ownership agreement, each joint owner may
grant non-exclusive licences to third parties to exploit jointly-owned results
(without any right to sub-license), if the other joint owners are given: (a)
at least 45 days advance notice and (b) fair and reasonable compensation.
Once the results have been generated, joint owners may agree (in writing) to
apply another regime than joint ownership (such as, for instance, transfer to
a single owner with access rights for the others).
Beneficiaries undertake that any joint ownership agreement on Joint Foreground
shall be concluded in good faith, under reasonable and non-discriminatory
conditions.
In case of joint ownership of Foreground each of the joint owners shall be
entitled to use the joint Foreground as it sees fit, and to grant non-
exclusive licenses to third parties, without any right to sub-license, without
obtaining any consent from, paying compensation to, or otherwise accounting to
any other joint owner, unless otherwise agreed between the joint owners
(following art 8.2 of the Grant Agreement).
### 2.3.2. Protection of Foreground
Where Foreground is capable of industrial or commercial application (even if
it requires further research and development, and/or private investment), it
should be protected in an adequate and effective manner in conformity with the
relevant legal provisions, having due regard to the legitimate interests of
all participants, particularly the commercial interests of the other
beneficiaries.
Furthermore, the beneficiaries are obliged to adequately protect the results
of the project for an appropriate period and with appropriate territorial
coverage if:
if:
1. the results can reasonably be expected to be commercially or industrially exploited and
2. protecting them is possible, reasonable and justified (given the circumstances).
Where a beneficiary, which is not the owner of the Foreground invokes its
legitimate interest, it must, in any given instance, show that it would suffer
disproportionately great harm.
Beneficiaries should, individually and preferably collectively, reflect on the
best strategy to protect in view of the use of the foreground both in further
research and in the development of commercial products, processes or services.
Applications for protection of results (including patent applications) filed
by or on behalf of a beneficiary must, unless the Agency 3 requests or agrees
otherwise or unless it is impossible, include the following:
“ _The project leading to this application has received funding from the
European Union’s Horizon 2020 research and innovation programme under grant
agreement No 777443”._
Furthermore, all patent applications relating to Foreground filed shall be
reported in the plan for the use and dissemination of Foreground, including
sufficient details/references to enable the Agency or the Commission to trace
the patent (application). Any such filing arising after the final report must
be notified to the Commission including the same details/references.
In the event the Agency assumes ownership, it shall take on the obligations
regarding the granting of access rights.
_**Transfer of Foreground** _
Each Party may transfer ownership of its own Foreground following the
procedures of the Grant Agreement (GA) Article 30.1.
It may identify specific third parties it intends to transfer Foreground. The
other Parties hereby waive their right to object to a transfer to listed third
parties according to the GA Article 30.1.
The transferring Party that intends to transfer ownership of results, shall,
however, give at least 45 days advance notice (or less if agreed in writing)
to the other Parties that still have (or still may request) access rights to
the results. This notification must include sufficient information on the new
owner to enable any beneficiary concerned to assess the effects on its access
rights. . The Parties recognize that in the framework of a merger or an
acquisition of an important part of its assets, a Party may be subject to
confidentiality obligations, which prevent it from giving the full 45 days
prior notice foreseen in GA Article 30.1.
### 2.3.3. Dissemination
Dissemination activities including but not restricted to publications and
presentations shall be governed by Article 29 of the Grant Agreement.
The Parties shall ensure dissemination of their own Foreground as established
in the Grant Agreement provided that such dissemination does not adversely
affect the protection or use of Foreground and subject to the Parties’
Legitimate Interests.
The topics of Publication, Publication of another Party's Foreground or
Background, Cooperation, and Use of names, logos and trademarks will be
updated in this IPR Plan deliverable in the next version ( _D5.10 in month 9_
), as they are influenced by the inputs and agreements reached for the
_Communication and Dissemination Plan v1 (D5.1 in month 6)_ .
### Software Access Rights
Parties' Access Rights to Software do not include any right to receive Source
Code 6 or Object Code 7 ported to a certain hardware platform, or any right
to receive Source Code, Object Code or the respective Software Documentation
8 in any particular form or detail, but only as available from the Party
granting the Access Rights.
The intended introduction of Intellectual Property (including, but not limited
to, Software) under Controlled License Terms in the Project requires the
approval of the PMB (Project Management Board) to implement such introduction
into the Consortium Plan.
Parties agree that Access Rights to Software which is Background or Foreground
only include: access to the Object Code and, where normal use of such an
Object Code requires an Application Programming Interface (API) 9 , access to
the Object Code and such an API; and, if a Party can show that the execution
of its tasks under the Project or the Use of its own Foreground is technically
or legally impossible without Access to the Source Code, Access to the Source
Code to the extent necessary.
The Source Code shall be deemed as Confidential Information and the use of the
Source code shall be strictly limited to the necessary extent.
For the avoidance of doubt, any grant of Access Rights not covered by the CA
shall be at the absolute discretion of the owning Party and subject to such
terms and conditions as may be agreed between the owning and receiving
Parties.
### 2.3.4. Access Rights for Use
_Access rights concerns both Background and Results of the Project
(Foreground)._
This section will be updated in following versions of this deliverable _(D5.11
in month 18 and D5.12 in month 24)_, as it will be influenced by agreements
and inputs in the _Exploitation and Sustainability Plan v3 (D5.7 in month
18)_. Two cases are foreseen for granting access to the Parties: for the
purpose of implementation of a Party's tasks in the Project, and for
exploiting the Party's own results.
_General rules regarding Access rights to the project results are established
by Article 31 of the Grant Agreement and the principles governing Access
rights to the Background are laid down in the Article 24 of this Agreement._
6
Source Code: means software in human readable form normally used to make
modifications to it including, but not limited to, comments and procedural
code such as job control language and scripts to control compilation and
installation.
7
Object Code: means software information, being technical information used or,
useful in, or relating to the design, development, use or maintenance of any
version of a software programme.
8
Software Documentation: means software information, being technical
information used or, useful in, or relating to the design, development, use or
maintenance of any version of a software programme.
9
Application Programming Interface: means the application programming interface
materials and related documentation containing all data and information to
allow skilled Software developers to create Software interfaces that interface
or interact with other specified Software.
### Confidentiality (Non-Disclosure)
All information in whatever form or mode of transmission, which is disclosed
by a Party (the “Disclosing Party”) to any other Party (the “Recipient”) in
connection with the Project during its implementation and which has been
explicitly marked as “confidential”, or when disclosed orally, has been
identified as confidential at the time of disclosure and has been confirmed
and designated in writing within 15 days at the latest as confidential
information by the Disclosing Party, is “Confidential Information”.
In relation to Confidential Information, each Party undertakes not to use such
Confidential Information for any purpose other than in accordance with the
terms of the OASIS Grant Agreement and CA during a period of five years from
the date of disclosure by the disclosing Party, and to treat it and use
reasonable endeavors in order to ensure it is kept confidential and not
disclose the same to any third party without the prior written consent of the
owner in each case during the aforesaid period of five years.
The above shall not apply for disclosure or use of Confidential Information,
if and in so far as the Recipient can show that:
* The Confidential Information becomes publicly available by means other than a breach of the Recipient’s confidentiality obligations;
* The Disclosing Party subsequently informs the Recipient that the Confidential Information is no longer confidential.
## 2.4. Exploitable Background/Foreground and its Use
OaSIS uses a number of existing, available background items of the parties to
the project, which were listed in the CA. Table 1 shows the updated list.
According to the Article 24 of the Grant Agreement the partners must identify
and agree (in writing) on the background for the action (‘agreement on
background’).
In this section the intellectual property developed within the scope of the
project will be detailed, work package by work package. A list of foreseen
exploitable foreground can be found in Table 2. As the project evolves, more
exploitable products are expected to come up. Each item of information or
knowledge identified as Background for the project shall be agreed upon by all
partners and expressed in writing in the Attachment 1 to the Consortium
Agreement entitled: _Attachment 1: Background included._
Therefore, this list will be updated regularly by the Project Coordinator or
when requested by any partner of the consortium.
<table>
<tr>
<th>
Partner nº
</th>
<th>
Participating Legal Name
</th>
<th>
Background included
</th> </tr>
<tr>
<td>
1
</td>
<td>
Cracow University of Technology – The Coordinator
</td>
<td>
None
</td> </tr>
<tr>
<td>
2
</td>
<td>
Linknovate Science SL
</td>
<td>
* Extensive use of machine learning and natural language processing to identify the organizations in the document that match the user's query and to aggregate the results in a unique 'entity profile' (record linkage).
* Delivery of competitive intelligence insights in its current 'commercial product': the "search engine" Linknovate.com. Data sources currently covered: Scientific Publications, Conference Proceedings, Grants (EC Cordis, UK Gateway to Research (GtR), Narcis (Holland's CRIS Systems), USA SBIR/STTR grant program and others), Patents and trademarks (USPTO and EPO), News (specialized sources such as MIT Techreview, etc.), Web (monitoring of corporate websites).
* The use of different ranking retrieval algorithms, which exploit multiple features of documents for showing the most relevant results to a user query. These capabilities lay the technology foundation for Trend identification, Expert search and Data Visualization based on R&D and R&I data.
* This IP, recognized by LKN as background IP, is an essential part of LKN's product Linknovate.com.

This background is considered trade secret / undisclosed know-how and business information by LKN, protected by Directive (EU) 2016/943 of the European Parliament and of the Council of 8 June 2016 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure. 4
</td> </tr>
<tr>
<td>
3
</td>
<td>
EURADA
</td>
<td>
None
</td> </tr> </table>
Table 1. List of Background included in project OASIS.
So far the Consortium has not identified any Foreground IP that should be
mentioned. It is expected that IP may be generated from the tasks and actions
in WP3 (Data Collection and Big Data Analysis) and from the methodology
editions and improvements generated over the life cycle of the project.
<table>
<tr>
<th>
WP
</th>
<th>
Exploitable foreground description
</th>
<th>
Timetable
</th>
<th>
Expected
IPR protection
</th>
<th>
Owner &
other partners involved
</th> </tr>
<tr>
<td>
2,3, 4
</td>
<td>
Methodology for benchmarking SME
segmentation support and segmentation measures.
Improvements in innovation methodologies
</td>
<td>
M3 to M24
</td>
<td>
[Estimated]
Appropriate CC license
</td>
<td>
All partners
[LKN,
EURADA, CTU]
</td> </tr>
<tr>
<td>
3
</td>
<td>
Data analytics software tools
</td>
<td>
M12 to M24 (2019)
</td>
<td>
To be updated in following
IPR &DMP versions 5
</td>
<td>
LKN
</td> </tr> </table>
Table 2. Exploitable Foreground list in project OASIS.
## 2.5. Conclusions
This deliverable shows the Intellectual Property management in the OaSIS
project and summarizes the general agreement, which is settled in the
Consortium Agreement. This report will be updated taking into account the
value-added methodologies and technologies developed during the project.
The Plan for the use and dissemination of foreground presented in this report
will be used as a brief summary and a preview for “Deliverable 5.1.
Communications and Dissemination Plan” and “Deliverable 5.2. Exploitation and
Sustainability Plan”, which will be prepared in month 6 within “WP5:
Communication, Dissemination and Exploitation”, led by EURADA. This plan will
be completed in deliverable D5.1 and also along the life of the project. The
dissemination action is based on three pillars: Dissemination for Awareness,
Understanding and Action.
# 3\. Data Management Plan
The OaSIS project aims at providing a thorough, evidence-based analysis for
segmentation of innovative SMEs and the assessment of the effectiveness of the
multiple support mechanisms currently used in agencies across Europe.
Towards this end, the OaSIS project will use unique Big Data and Machine
Learning capabilities to develop a performance-based methodology for SME
segmentation which will allow the (Regional and National) Development Agencies
to target SMEs in their industrial fabric with the instruments that
statistically proved to produce the highest impact for similar types of
companies.
Following the EU’s guidelines regarding the DMP, this document will be updated
during the project lifetime (in the form of deliverables).
The following is the OaSIS Project Data Management Plan. It is structured as
recommended in the template for Data management Plans in _Guidelines on FAIR_
_Data Management in Horizon 2020_ .
## 3.1. Data Summary
This document is the OaSIS Project Data Management Plan (DMP). The DMP
describes the data management life cycle for all datasets to be collected,
processed and/or generated by a research project. It covers:
* the handling of research data during and after the project
* what data will be collected, processed or generated.
* what methodology and standards will be applied.
* whether data will be shared/made open and how
* how data will be curated and preserved
To provide “data-driven” insights such as identification of “gazelle”
companies
(enterprises with a high growth potential), SMEs with high innovation
potential, SMEs with high internationalization potential, and capacity to
categorise them into clusters of interest, industries, and/or value chains;
the consortium shall document, analyse, and correlate the data collected and
shared by European Regional Development Agencies (the Agencies). The Agencies
sharing their data will be close collaborators of the project, and will have
Third Party status when the conditions set by the Consortium are met (see
details in the Open Call for Agencies, publicly available on the project
website and in the corresponding deliverable), and the corresponding GA
Amendment is approved by the EC.
The agencies will provide data in “.xml”, “.csv” and other formats. The
consortium will consider the optimal manner to store and process this data; at
this first stage, a relational database (SQL) will be used for its analysis.
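As a minimal sketch of this first-stage approach, the snippet below loads an agency ".csv" file into a relational (SQLite) table; the table and column names are hypothetical.

```python
# Minimal sketch: ingest an agency CSV file into a relational table.
import csv
import sqlite3

conn = sqlite3.connect("oasis.db")
conn.execute("""CREATE TABLE IF NOT EXISTS sme_support (
    sme_name TEXT, sector TEXT, instrument TEXT, year INTEGER)""")

with open("agency_data.csv", newline="") as f:
    rows = [(r["sme_name"], r["sector"], r["instrument"], int(r["year"]))
            for r in csv.DictReader(f)]

conn.executemany("INSERT INTO sme_support VALUES (?, ?, ?, ?)", rows)
conn.commit()
conn.close()
```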
The insights generated will be of use to the project's Agencies and their SME
clients.
This DMP covers two types of data subject to FAIR use:
1. The data shared by the collaborating agencies (in .xml and/or .csv formats, and under CC-BY licenses, a type of Creative Commons attribution license 6 ), and subject to the confidentiality agreements in place by OASIS partners.
2. The insights generated by the consortium (in form of data visualizations, reports, and deliverables).
## 3.2. FAIR Data
### 3.2.1. Making data findable, including provisions for metadata
The data produced and/or used in the project will be discoverable with
metadata. The consortium will automatically extract primary keywords from the
agencies’ data in order to optimize possibilities for re-use.
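A minimal sketch of such keyword extraction is shown below; a production pipeline would likely use TF-IDF or similar weighting rather than raw term frequencies.

```python
# Minimal sketch: rank frequent terms in a record's free text as keywords.
from collections import Counter
import re

STOP_WORDS = {"the", "and", "for", "with", "that", "this", "are", "was"}

def primary_keywords(text, top_n=5):
    words = re.findall(r"[a-z]{3,}", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(top_n)]

print(primary_keywords("Support instrument for SME internationalisation and growth"))
```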
Naming conventions have not been defined yet. This will be one of the first
tasks to be performed when data collection from the agencies begins.
End data products to be produced by OASIS, such as reports, whitepapers, etc.,
will have version numbers, but several of the more basic datasets (SME data
visualizations, for instance) will be dynamic in nature and some providers of
data may not define versions.
### 3.2.2. Making data openly accessible
The OASIS project covers two types of data subject to FAIR use:
1. The data uploaded by the collaborating agencies (in .xml, .csv and other formats, and under CC-BY licenses), and subject to confidentiality agreements: NDA and Database License Agreement attached in the Annexes of this document.
2. The insights generated by the consortium (e.g. in form of data visualizations, reports, and deliverables).
Regarding Agencies' data, it will be closed/restricted. Only the respective
agency and the consortium will have access to this data, and only for project
purposes. The data shared by the Agencies will carry the Creative Commons
license CC-BY, which requires attribution (giving the author or licensor
credit in the manner specified by them) but allows copying, distributing,
displaying and performing the work and making derivative works and remixes
based on it.
As for OASIS data, or insights generated by the consortium based on agencies’
aggregated data, part of it will be open and part will have a commercial use
to further explore the sustainability of the technologies and services
developed under OASIS.
The Information Platform, contained under the Oasis Portal (
_https://oasisportal.eu_ ), will be the key tool to make the data and data
products accessible to a broad range of Stakeholders, including policy makers,
industry and academia. Access will be possible for both verification and re-
use purposes and the Information Platform will contain view, search and
visualization services alongside standardized view- and download-features.
No special software is expected to be needed to access the data that will be
made publicly available. If needed, and/or requested by our close
collaborating Agencies, who will work under Third Party status (see Consortium
Agreement and Deliverable D1.2 Consortium Strategy), LKN (as technical lead
partner) will consider enabling a REST API service for data access, promoting
and using CERIF (Common European Research Information Format) 7 as the
metadata format when possible.
The Consortium will consider for the different types of data, associated
metadata, documentation, and code which will be made public to be deposited in
certified repositories, such as _OpenAire_ (the consortium will explore
appropriate arrangements with the repository), and in the project’s websites (
_www.projectoasis.eu_ and _https://oasisportal.eu_ ).
Registered access will be required for the data (e.g. data insights, graphics,
etc.) accessible via the project's website. The consortium will keep logs of
closed/restricted datasets (for the project partners), while a log-in will be
required for accessing open data. However, the data hosted in OpenAire will be
publicly available (complying with the repository policy) and no logs will be
required.
### 3.2.3. Making data interoperable
The Consortium will have a focus on the interoperability of the data produced.
The consortium will allow for data exchange and re-use between researchers,
institutions, organizations, countries, etc. In order to make the data
produced in the project interoperable and consumable, the consortium will
consider using CERIF (the Common European Research Information Format) for the
data that is not restricted (data that is not shared by Regional Agencies, but
produced by the Consortium).
### 3.2.4. Increase data re-use (through clarifying licenses)
In order to permit the widest re-use possible of the data, the openly accessed
datasets will be CC-BY licensed, requires for attribution (giving the author
or licensor the credits in the manner specified by these) but allows for
copying, distributing, displaying and performing the work and make derivative
works and remixes based on it.
Restricted datasets cannot be used by third parties, as expressed in the NDA
and the “Database license agreement” used for data sharing purposes with
Agencies and other organisations.
## 3.3. Allocation of Resources
Since the data gathered from Regional Agencies is heterogeneous and OASIS is a
data-intensive project, we estimate the costs for making the data FAIR at
between 3 and 5 person-months. Costs associated with data gathering, cleaning
and indexing, as well as open access capacities of research data, have been
considered in the grant agreement.
The project partner in charge of the data management is LKN, as technology
(WP3) and DMP lead. All partners are responsible for contributing to the DMP
and the _fair data_ use.
Data preservation is estimated for the project lifetime plus three years.
Longer term data preservation will be considered with the mentioned data
repositories (e.g. openAIRE). Further decisions will be made by the Consortium
board updating this deliverable.
## 3.4. Data Security
Provisions for data security will differ with regard to the nature of the
data.
Information coming from open datasets will not receive any special treatment.
This information will be directly processed as it arrives at the platform and
then indexed. Temporal copies of the original data will be kept in OASIS’
information processing nodes, but no extra security measures will be put in
place for this kind of information given its public nature.
### Confidential Data/Restricted Data obtained from the Agencies
Information from restricted datasets will be uploaded via SFTP 8 , given that
it provides secure file transfer and manipulation functionality over any
reliable data stream. Information will be stored on a dedicated GNU/Linux
server with state-of-the-art safety measures. In particular, the SFTP server
will chroot every agency user in order to prevent non-authorised parties from
accessing the stored information. Access policies to the server will be
restricted to LKN employees, who have signed appropriate NDAs. Weekly
incremental backups are planned in a secondary, equally secure repository.
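The sketch below illustrates such a weekly incremental backup: only files modified since the previous run are copied to the secondary repository. Paths and the timestamp mechanism are illustrative.

```python
# Minimal sketch: copy only files changed since the last backup run.
import shutil
from pathlib import Path

SOURCE = Path("/srv/sftp-data")          # primary storage (illustrative path)
BACKUP = Path("/mnt/backup/sftp-data")   # secondary secure repository
STAMP = BACKUP / ".last_backup"          # records the time of the previous run

BACKUP.mkdir(parents=True, exist_ok=True)
last_run = STAMP.stat().st_mtime if STAMP.exists() else 0.0

for src in SOURCE.rglob("*"):
    if src.is_file() and src.stat().st_mtime > last_run:
        dest = BACKUP / src.relative_to(SOURCE)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)

STAMP.touch()  # mark this incremental run as done
```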
Regarding data and cyber security the following standard measures are in place
in order to reduce the risk:
* End to end encryption of data in transition.
* Encryption of sensitive data at rest in order to minimize exposure to non-authorised access events (see the sketch after this list).
* Periodic vulnerability testing of the secure servers.
* Standardised enforced data deletion policy of sensitive data when required by a user.
* Advance user-level data security measures with fine-grained access control.
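As a minimal illustration of the at-rest encryption measure above, the sketch below uses the Python "cryptography" package; key management (where and how the key itself is stored) is deliberately left out of scope.

```python
# Minimal sketch: symmetric encryption of a sensitive file at rest.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, loaded from a secure key store
fernet = Fernet(key)

with open("agency_data.csv", "rb") as f:
    token = fernet.encrypt(f.read())
with open("agency_data.csv.enc", "wb") as f:
    f.write(token)

# Decryption for authorised processing:
with open("agency_data.csv.enc", "rb") as f:
    restored = fernet.decrypt(f.read())
```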
Regarding personal information: it is not part of the core and objectives of
the OASIS project to deal with any personal data, but rather with professional
and commercial data which can be used for a better segmentation of SMEs (such
as number of employees, sector, value chain, turnover, growth over time,
participation in advanced services or events, and others). However, in the
case that any agency shares with OASIS any information containing personal
data, we plan to act according to the GDPR regulation, in effect after May 25,
2018, and according to the procedures of the local Data Protection Agency
(AGPD).
For these matters, Jose López-Veiga is designated as our Data Protection
Delegate; he will coordinate the registration of our data processing
activities with the AGPD and will be in charge of the risk analysis over the
personal data. Moreover, an audit of the security measures will be performed,
with the results of the risk analysis producing a concise report which will
update this deliverable (IPR & DMP) in the versions that follow D5.9.
According to the GDPR and AGPD regulations, we will immediately communicate
any breach of the security controls over the personal data.
Furthermore, as stated in our agreements (NDA and Database License Agreements)
with the Agencies, which are included in the Annexes of this deliverable:
_A restricted access procedure shall be implemented for each of the Partner’s
employees and collaborators, in particular Parties shall:_
− _use the data solely in relation to the purpose of Project;_

− _disclose the data solely to its employees or other associates, whatever the legal basis of their cooperation is, who are directly involved in the Project implementation and who have a reasonable need to know the content of the database for the purposes of the duties assigned to as a result of or in connection with the Project (“Recipient”);_

− _ensure that every Recipient who receives or obtains access to the database shall be aware of its confidential nature and shall comply with the restrictions on non-disclosure;_

− _notify the disclosing Agency as soon as practicable after it becomes aware of any event of loss or unauthorized disclosure or use of any substantial part of the database;_
## 3.5. Ethical Aspects
The OASIS Project consortium has not identified any specific ethical issues
related to the work plan, outcomes or dissemination, as the program does not
deal with any personal or medical data. We do note that individual partners
will have to adhere to any special ethical rules. In fact this is a point were
the Consortium wants to update the Ethical Requirements for Data treatment
The data management and compliance team are undertaking a significant review
of all policies and procedures on ethics and data use. We will continue to
work on the current data protection policy with the commitment to protect and
process data with adherence to current legislation and policies, in particular
General Data Protection Regulation that comes into force on 25 May 2018.
The project adheres to the commitment to holding any data in secure conditions
and will make every effort to safeguard against accidental loss or corruption
of data.
The Regional Development Agencies are obliged to remove any sensitive or
personal information from the databases prior to their disclosure.
Agencies will participate in the project based on informed consent, may
withdraw at any time, and are able to request information not be published.
Informed consent forms describe how data is stored and used and how
confidentiality is maintained in the long-term. Measures include data
anonymization, aggregating individual-level data and generalizing the meaning
of detailed text.
# 1 Executive summary
The purpose of the Data Management plan is to describe the processes in place
for securing proper and safe handling of data during and after closure of the
project.
The document is divided into the following chapters:
Chapter 1: “Executive Summary” (this chapter)
Chapter 2: “Background”
Provides the context of the contents, specifically with respect to the scope,
Grant and Consortium Agreements
Chapter 3: “Objectives / Aims”
Defines the purpose and scope of the Data Management Plan (DMP)
Chapter 4: “Information Management & Policy”
Identifies the context of the data management within the IMPACT-2 domain.
Chapter 5: “Data Archiving and Preservation”
Identifies the procedures to be used for archiving, preservation and disposal
of data.
Chapter 6: “File naming conventions”
Describes the file naming convention for datasets filenames and the associated
coding for inclusion in archiving extracts and future dissemination
classification.
Chapter 7: “Conclusions”
Chapter 8: “References”
Reference documents governing the execution of IMPACT-2.
Chapter 9: “Antitrust statement”
# 2 Background
This document corresponds to Deliverable D9.1 “Data Management Plan (DMP)” for
the project IMPACT-2, which was submitted in response to the EU S2R JU 2017
call for members (proposal nr 777513). The GA was negotiated and closed during
the summer of 2017, with official project start on 1 September 2017. The
consortium is made up of 13 members.
**Table 1: IMPACT-2 members**
IMPACT-2 has 9 WPs
WP1 Management (TRV lead)
WP2 Analysis of the Socio-economic impact (DB AG lead)
WP3 SPD Implementation (TRV lead)
WP4 KPI tree (DLR lead)
WP5 Standardisation (SNCF lead)
WP6 Smart maintenance (DB AG lead)
WP7 Integrated Mobility (BT lead)
WP8 Human Capital (DB AG lead)
WP9 Dissemination (TRV lead)
The purpose of the DMP is to outline the policy for handling and storage of
data during the project and after its closure. Detailed descriptions on
approved or disapproved types of data or files are outside the scope of this
DMP. Furthermore, the DMP policy set out in this document should be seen as
complementary and not in conflict with relevant sections in the CA or policies
in place by the beneficiaries.
The project addresses WA1, WA2, WA3, WA4 and WA6 in the MAAP.
**Figure 1: Overview Work Areas**
# 3 Objectives / Aims
This document has been prepared to provide an instruction for how to handle
data in a safe, secure and legally correct way. This is of utmost importance
for the quality and integrity of the project results. IMPACT-2 will provide
answers to the following objectives:
* Evaluating the effects for mobility, society and environment induced by new technology solutions and developments
* Introducing relevant targets and needs to create a more attractive, a more competitive and a more sustainable rail system
* Defining System Platform Demonstrators (SPD) that represent future application use cases
* Defining Key Performance Indicators (KPIs) that enable the assessment of the Shift2Rail overall target achievement
* Smart maintenance concept for the whole railway system, which includes Condition Based Maintenance for passenger trains and integrated infrastructure & rail-vehicle data management
* Advanced business service applications to be integrated in the Traffic Management process; these cover new business software for freight operations to improve freight resource management and processes, including crew and vehicle dispositions
* The railway staff using the S2R technologies need to be prepared and trained, and the organisations need to be adapted to the faster evolution of technology in the future. A concept for the management of these changes on the human capital side (e.g. changes in job profiles, skills and organisation) will be drafted.
Trends, scenarios and socio-economic impact assessments will address the
societal values of S2R, whereas the KPI model focuses on S2R's operational
targets, i.e. step-change improvements in costs, capacity and reliability.
Both levels interact during the course of the project. Exchange of data
between the WPs will be a key element for the successful running of IMPACT-2.
**Figure 2 IMPACT-2 General concept**
The aim of the DMP is to control and ensure quality of project activities and
to effectively and efficiently manage the data generated within IMPACT-2. It
also describes how data will be collected, processed, stored and managed
including aspects like external accessibility and long term archiving.
# 4 Information Management & Policy
Information management is the discipline by which information is managed
within an organisation. It covers collection, ownership, archiving and
disposal. Information management is also a distinct part of the management and
governance of companies and projects. This document does not intend to go in
depth through theoretical concepts of information management layers or
architectures. The purpose is to highlight the practical handling of data in
IMPACT-2 so that it adequately deals with company concerns about data that may
be commercially sensitive while responding to the need for dissemination of
project results at conferences, in academic papers and the like.
The DMP has been developed to address the following management and policy
objectives:
* Comply with participating companies' commercial interests whilst at the same time allowing the project activities necessary for the successful completion of deliverables to be carried out.
* Guaranteeing adequate quality of data
* Fulfil required storage of technical and financial data as requested by the H2020 rules and the CA
* Allow the coordinator and the steering committee to get timely and accurate information on the progress of individual tasks, milestones and deliverables
## 4.1 DMP – handling of data
The purpose of the DMP is to make sure that the handling of data is done in a
way that safeguards quality, storage and confidentiality requirements.
Overseeing the appropriate handling of data for modelling and dissemination of
project results is a key management activity for the TMT, which is composed of
DLR, ASTS, BT, CAF, DB, SNCF and TRV.
Adequate handling of data is key to the successful completion of IMPACT-2. The
project will be dependent on gathering data from the other IPs, and it will
generate data of importance not only for IMPACT-2 but for the S2R endeavour
itself. In particular, the handling of data in the data-“heavy” WPs is
described below:
### 4.1.1 WP1 Management, Lead TRV
WP1 deals exclusively with management and coordination of the project. No
issues concerning collection, safe storage, quality of data or metadata are
explicitly handled in WP1.
### 4.1.2 WP2 Analysis of the Socio-economic impact, Lead DB AG
All data and documents produced within the IMPACT-2 project are stored and
exchanged via the EC cooperation tool. The confidentiality of these data and
of the content is covered by the Grant Agreement, the Consortia Agreement and
the nondisclosure statement of the S2R partners.
### 4.1.3 WP3 SPD Implementation, Lead TRV
All data and documents produced within the IMPACT-2 project are stored and
exchanged via the EC cooperation tool. The confidentiality of these data and
of the content is covered by the Grant Agreement, the Consortia Agreement and
the nondisclosure statement of the S2R partners. Data that is provided by the
IPs and SPDs as input for mode choice modelling is covered by the
nondisclosure statement and confidentiality agreements of the S2R partners.
Location of digital data: Data is stored in a folder on a VTI server, for
which backup is made every night. In case of analogue data this is stored in
locked cabinets.
### 4.1.4 WP4 KPI tree, Lead DLR
Project management and project content related data and files: All data and
files directly related to the project IMPACT-2 are stored and exchanged via
the EC cooperation tool. The confidentiality of these data and of the content
is covered by the Grant Agreement, the Consortia Agreement and the
nondisclosure statement of the S2R partners.
Data that is provided by the IPs, TDs or SPDs as well as any other input for
the KPI development and calculation: All data is stored in a folder on a DLR
server, for which backup is made every night. The confidentiality of this data
is covered by the nondisclosure statement and confidentiality agreements of
the S2R partners.
### 4.1.5 WP5 Standardisation, Lead SNCF
Data exchange between SNCF and DB concerning Standardisation-data is done via
Email and stored in the EC cooperation tool. Raw data provided by the IPs, TDs
and Shift2Rail project teams, and any input to the Standardisation roadmap and
standardisation development plan, are stored in the EC cooperation tool.
The confidentiality of these data and of the content is covered by the Grant
Agreement and the Consortia Agreement.
### 4.1.6 WP6 Smart maintenance, Lead DB AG
All data and files directly related to the project IMPACT-2 WP 6 are stored
and exchanged via the EC cooperation tool. The confidentiality of these data
and of the content is covered by the Grant Agreement, the Consortia Agreement
and the nondisclosure statement of the S2R partners.
The CBM data provided by IMPACT-2 partners should only be used within the S2R
project by the participants of WP 6. The data should not be shared with other
persons or third parties. The use of the data for public deliverables requires
the agreement of the partner who provides the data.
### 4.1.7 WP7 Integrated Mobility, Lead BT
In WP7 communication between the involved partners will be done by mail and
via on-line sessions. Pre-drafts of the deliverables, templates and other
administrative documents will be distributed and stored via the EC cooperation
tool.
Four Face2Face (F2F) Progress Review meetings are scheduled for 2018 to cover
critical technical discussions and reviews. The number of F2F meetings will be
strongly reduced when entering the phase of non-collaborative development of
the proposed prototypes.
### 4.1.8 WP8 Human Capital, Lead DB AG
Data exchange between DB and SNCF/CP concerning Human Capital-data is done via
Email. The confidentiality of these data and of the content is covered by the
Grant Agreement and the Consortia Agreement.
## 4.2 Data types
The minimum requirement on data handled in the S2R collaboration tool is that
it is consistent with the descriptions in Annex 1 of the GA (DoA), i.e. data
types covering the tasks involved in carrying out the project.
## 4.3 Data quality
The responsibility for gathering data to be used for the creation of
scenarios, SPDs, KPI models and the like is delegated to the parties involved
in the corresponding tasks. Routine academic quality measures should be
applied by all parties in establishing quality and when relevant integrity.
## 4.4 Data sharing
The following system for classification of data sets will be used:
* CO0 – do not use the information as a reference or as a source for the project. Information will be only provided for controlling / checking and “fine-tuning” of tools. Information / Data shall not be used for sharing between the project partners. (Example: sensitive economic company owned data for the validation of the KPI-tool)
* CO1 - confidential level 1: data shall not be shared at any time during or after the project outside the original work package members
* CO2 - confidential level 2: data shall not be shared outside the consortium members
* CO3 - confidential level 3: data can be shared outside the consortium without restriction
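As an illustration only, the sketch below shows how this taxonomy could be encoded so that sharing decisions can be checked programmatically; the helper is hypothetical and not part of the S2R collaboration tool.

```python
# Minimal sketch: encode the CO0-CO3 taxonomy and check a sharing decision.
from enum import IntEnum

class Classification(IntEnum):
    CO0 = 0  # validation/fine-tuning only; never shared between partners
    CO1 = 1  # shared only within the original work package
    CO2 = 2  # shared only within the consortium
    CO3 = 3  # shareable outside the consortium without restriction

def may_share(level: Classification, audience: str) -> bool:
    minimum = {"work_package": Classification.CO1,
               "consortium": Classification.CO2,
               "public": Classification.CO3}
    return level >= minimum[audience]

assert may_share(Classification.CO2, "consortium")
assert not may_share(Classification.CO2, "public")
```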
## 4.5 Data rights
Users of the S2R collaboration tool will have different access rights based on
their contribution and role within the project. In order to ensure that
relevant users get access to read, edit and add information, the following
access rights are available and should be used accordingly:
### Guest
This is an observer role, with no activity but with the capacity to download.
Guest rights give access to all documents and meetings with the Access field
marked as “Guest” in the tool (i.e. documents and meetings with access flagged
with higher rights will not be visible to Guests).
### Limited user
This is an active role, but for those with limited activity attributed (not as
active as a User), and with the capacity to download.
### User
This is a role for members actively contributing in a “domain”, enabling them
to upload new documents.
### Power user
This role has the same rights as User and, in addition, is able to create
meetings.
### Domain Administration
This role is assigned to leaders of domain. In addition to the Power User, the
Domain Admin can:
* Manage user rights within the domain;
* Change documents Status to “Issued”, confirming that the peer review process is completed;
* Create and manage action points from meeting.
### Project administration
The IMPACT-2 Project coordinator has overall project administration rights,
enabling to administrate the complete project.
• The same applies to the S2R JU Groups’ coordinators concerning the S2R JU
Group’s database
# 5 Data Archiving & Preservation
## 5.1 Archiving
At the formal project closure, all data material that has been collected or
generated within the project, registered in the S2R collaboration tool and
classified for archiving shall be copied and transferred to a digital archive
of the S2R collaboration tool. The S2R JU is responsible for its adequate
long-term preservation, as well as for maintaining a system for queries and
retrievals for as long as the data files are to be kept. The relevant data
created by each IMPACT-2 project partner and not stored within the S2R
collaboration tool will be archived by each responsible partner.

Recording and archiving of audio or visual data files, as well as personal
data, requires written approval by the concerned subjects and is the
responsibility of the collecting organisation.
## 5.2 Confidentiality
Information shall only be made available to those who are authorised to access
it. Information Owners are accountable for defining access to the information
they own. To safeguard and prevent unauthorised access to information,
Information Owners shall classify and govern information in accordance with
the data set classification taxonomy described in the previous chapter and
respecting commitments in the CA. The procedures implemented for data
collection, storage, protection, retention and destruction comply with the EU
Directive 95/46/EC on the protection of individuals with regard to the
processing of personal data and on the free movement of such data and Section
5 of the German Federal Data Protection Act of 22 May 2001 (BDSG).
## 5.3 Data integrity
Data and information owners, overseen by the TMT (DLR, TRV, BT, CAF, SNCF and
DB), are responsible for the level of protection required against unauthorised
or accidental changes. Additional quality assurance is provided by the
scientific peer review of articles and papers that come out of the project.

Information shall be protected against loss or damage until it is no longer
required to be retained for audits by the EU. Further retention of records is
subject to negotiation between TRV and the party requesting such archiving.
## 5.4 Cyber security
All information flagged for archiving shall be screened for malware infection
before entering the S2R collaboration tool repository.
# 6 File Naming Conventions
## 6.1 Document code structure
All files irrespective of the data type shall be named in accordance with
Cooperation Tool document coding rules.
Whenever a new document is produced within the project, it must be uploaded on
the Cooperation Tool. When a document is uploaded, a unique document code is
assigned. The following subsections describe how this identification code is
structured and set up.
The identification code contains the six following sections:
[Project] – [Domain] – [Type] – [Owner] – [Number] – [Version]
where:
* [Project] is IMP2 for all IMPACT-2 documents;
* [Domain] is the relevant domain in the Cooperation Tool (WP, Task or project body);
* [Type] is one letter defining the document category;
* [Owner] is the trigram of the deliverable/document leader organisation;
* [Number] is an order number allocated by the Cooperation Tool when the document is first created.
* [Version] is the incremental version number, automatically incremented at each upload.
Examples:
<table>
<tr>
<th>
**Project**
**-**
**Code**
</th>
<th>
**Domain**
**-**
**(3-4 characters)**
</th>
<th>
**Type**
**-**
**(1 letter)**
</th>
<th>
**Owner**
**-**
**(3 letters)**
</th>
<th>
**Number**
**\- (3 digits)**
</th>
<th>
**Version**
**(2 digits)**
</th> </tr>
<tr>
<td>
IMP2 -
</td>
<td>
TMT -
</td>
<td>
B -
</td>
<td>
TRV -
</td>
<td>
001 -
</td>
<td>
01
</td> </tr>
<tr>
<td>
IMP2 -
</td>
<td>
SC -
</td>
<td>
T -
</td>
<td>
SIE -
</td>
<td>
002 -
</td>
<td>
03
</td> </tr>
<tr>
<td>
IMP2 -
</td>
<td>
WP1 -
</td>
<td>
P -
</td>
<td>
CAF -
</td>
<td>
003 -
</td>
<td>
02
</td> </tr> </table>
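By way of illustration (a minimal sketch, not an official utility of the Cooperation Tool, which allocates numbers and versions automatically), the code structure could be composed and checked as follows; the field widths are assumptions read off the example table:

```python
import re

# [Project]-[Domain]-[Type]-[Owner]-[Number]-[Version]
CODE_PATTERN = re.compile(
    r"^IMP2-"          # project code, fixed for all IMPACT-2 documents
    r"[A-Z0-9]{2,4}-"  # domain (WP, Task or project body); 2-4 characters assumed
    r"[A-Z]-"          # type, one letter (see section 6.2)
    r"[A-Z]{3}-"       # owner trigram
    r"\d{3}-"          # order number, 3 digits
    r"\d{2}$"          # version, 2 digits
)

def build_code(domain: str, doc_type: str, owner: str, number: int, version: int) -> str:
    """Compose a document code and verify it against the expected pattern."""
    code = f"IMP2-{domain}-{doc_type}-{owner}-{number:03d}-{version:02d}"
    if not CODE_PATTERN.match(code):
        raise ValueError(f"malformed document code: {code}")
    return code

print(build_code("TMT", "B", "TRV", 1, 1))  # -> IMP2-TMT-B-TRV-001-01
```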
## 6.2 Document types
When creating a document in the Cooperation Tool, the document type should be
selected in the "Specific Information" section. This information will be used
to set up the identification code. Documents are classified among the
following types:
<table>
<tr>
<th>
**Letter**
</th>
<th>
**Name**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
A
</td>
<td>
Administrative
</td>
<td>
Any administrative document except contractual documents
</td> </tr>
<tr>
<td>
B
</td>
<td>
Meeting Agenda,
Presentation or
Minutes
</td>
<td>
Meeting Agenda, Presentation or Minutes
</td> </tr>
<tr>
<td>
C
</td>
<td>
Contractual document
</td>
<td>
Consortium Agreement, Grant Agreement and their approved amendments
</td> </tr>
<tr>
<td>
D
</td>
<td>
Deliverable
</td>
<td>
Deliverable identified as such under the Grant Agreement
</td> </tr>
<tr>
<td>
E
</td>
<td>
EC document
</td>
<td>
Document provided by JU (general rules, guidelines or JU experts documents)
</td> </tr>
<tr>
<td>
I
</td>
<td>
Internal
</td>
<td>
Document not to be circulated outside the project
</td> </tr>
<tr>
<td>
J
</td>
<td>
Publication
</td>
<td>
Document accessible to the public
</td> </tr>
<tr>
<td>
M
</td>
<td>
Model (template)
</td>
<td>
MS-Office document templates
</td> </tr>
<tr>
<td>
P
</td>
<td>
Periodic Report
</td>
<td>
All intermediate/periodic reports except those listed as deliverables. May be
a WP intermediate report or a project intermediate report requested by the
Grant Agreement but not listed as deliverable.
</td> </tr>
<tr>
<td>
R
</td>
<td>
Deliverable Review
Sheet
</td>
<td>
Filled review sheet used to gather peer review comments on a deliverable. It
can be also used to comment any other internal document when explicitly agreed
or requested by its owner.
</td> </tr>
<tr>
<td>
S
</td>
<td>
Risk Sheet
</td>
<td>
Filled risk sheet
</td> </tr>
<tr>
<td>
T
</td>
<td>
Technical contribution
</td>
<td>
Technical document contributing to a task/deliverable but not part of the
deliverable
</td> </tr>
<tr>
<td>
W
</td>
<td>
Proposal
</td>
<td>
Proposal for changes to the Consortium Agreement or Grant Agreement
</td> </tr>
<tr>
<td>
X
</td>
<td>
External document
</td>
<td>
Document produced by non-members of the project (e.g. papers, reports,
external public deliverables…) that, upon authorisation of the author(s), are
shared with the project due to its relevancy.
</td> </tr> </table>
## 6.3 Document status
A status is associated with each step of the document lifecycle:
* Draft: the document is under development by one or several partners
* Under review: the document is made available within the project/WP/task for peer review
* Issued: the document is ready for submission to the JU
* Obsolete: the document is withdrawn (cancelled or superseded).
# 7 Conclusions
This report should be read in association with all the referenced documents,
including the GA, the CA and the H2020 Annotated Model Grant Agreement, its
annexes and guidelines.

The report will be subject to revision as required to meet the evolving needs
of the IMPACT-2 project and will be formally reviewed at months 6 and 12 to
ensure it remains fit for purpose.
# Disclaimer
This document contains a description of the OpenAIRE-Advance project findings,
work and products. Certain parts of it might be covered by partner
Intellectual Property Right (IPR) rules, so, prior to using its content,
please contact the consortium head for approval.
In case you believe that this document harms in any way IPR held by you as a
person or as a representative of an entity, please notify us immediately.
The authors of this document have taken every available measure to ensure that
its content is accurate, consistent and lawful. However, neither the project
consortium as a whole nor the individual partners that implicitly or
explicitly participated in the creation and publication of this document
accept any responsibility for consequences that might arise from the use of
its content.
This publication has been produced with the assistance of the European Union.
The content of this publication is the sole responsibility of the OpenAIRE-
Advance consortium and can in no way be taken to reflect the views of the
European Union.
OpenAIRE-Advance is a project funded by the
European Union (Grant Agreement No 777541).
# Publishable Summary
This document is a Data Management Plan (DMP) for data collected and created
by OpenAIRE. The DMP was developed in line with the Guidelines on FAIR Data
Management in Horizon 2020 (version 3.0). The OpenAIRE infrastructure operates
by collecting, transforming and enriching metadata of research outputs, and by
interacting with third-party data and service providers. OpenAIRE also runs
the open access research data repository Zenodo, and in the frames of OpenAIRE
various research activities are undertaken. All of these aspects of the
project are covered in the DMP. The DMP describes the size and scope of the
data as well as policies adopted for data archiving, preservation, and
sharing. It is intended to further develop this DMP and keep it up-to-date as
the project and its requirements develop.
# 1 DATA RELATED TO THE PROJECT

## 1.1 Data summary
During the OpenAIRE-Advance project, many documents and other data for
internal purposes are produced. Their value varies: some are needed only for a
short period of time, while others remain valuable until the end of the
project or even longer. Essential project documentation will be created as a
result of tracking and validating project progress by the project boards
within WP1.

Access to documents and other data produced for internal purposes should be
limited to the project team. A crucial part of this output supports the
running of the project itself; this is the case for financial reports or
meeting notes, which should be preserved for an adequate period of time.
However, some documents can safely be made more broadly accessible, for
example the white paper deliverable from WP1. Some legal analyses may also be
used after the project.
## 1.2 Resources and responsibilities
The responsibility for managing project-related data lies with all partners.
This kind of data relates to all WPs; the leader of each WP decides how it is
managed.
## 1.3 Data security
All project data will be used in the most practical way from an operational
point of view. Because of its practical usage, it should be created and
persisted using convenient online collaborative tools such as Google
Spreadsheets or Google Documents, with custom access rights set for each
document according to the requirements. Some internal data will be stored in
the ICM infrastructure in OwnCloud storage available at
_https://box.openaire.eu_ , with the ability to make it publicly available
whenever possible.
## 1.4 Ethical and legal aspects
There are no specific ethical or legal aspects related to project data.
# 2 DATA RELATED TO NOAD OPERATION AND TRAINING MATERIALS

## 2.1 Data summary
Various dissemination materials will be produced and published as an outcome
of WP2, and an electronic version should be easily exposed to the public using
OwnCloud's document-sharing mechanisms. In addition to standard dissemination
materials such as leaflets, postcards, posters, factsheets and guides, a
number of short stakeholder-specific videos will be produced as general
tutorials for the OpenAIRE products.

WP3, focused on the operation of the NOADs network, will result in the
reinforcement and alignment of OA/OS policies in the form of unified policy
information, which should be made publicly available on the portal.
Additionally, two toolkits will be provided alongside two deliverables:

* D3.1 with a toolkit for policy makers on OA/OS
* D3.2 with a toolkit for researchers on legal issues

Training activities will be strengthened by the WP4 outcome in the form of
various interactive training materials, which will ultimately contribute to
Open Research Data and OS carpentry courses. To meet T5.2.1 requirements,
training materials should also be translated and adapted for use in Latin
America.
## 2.2 Resources and responsibilities
There are two main ways of gathering the data. Firstly, it will be created
within the project by the NOADs. Secondly, data will result from collaboration
between OpenAIRE and other projects and entities, such as FOSTER, EOSC-hub and
LA Referencia.
The responsibility for managing this category of data lies with the WP2, WP3
and WP4 leaders, who will be supported by the NOADs. Standard procedures for
data management will be introduced by the leader of each WP relevant to this
data.
## 2.3 Data security
All digital resources should be stored in the ICM infrastructure in OwnCloud
storage available at _https://box.openaire.eu_ with the ability to make it
publicly available. Some resources (e.g. policies) will become publicly
available on the portal. In both cases data security is ensured by the ICM
data center preservation policies.
## 2.4 Ethical and legal aspects
There are no specific ethical issues related to this category of data.
Copyright is among the most important legal aspects in this area. However, a
significant part of the materials will be licensed under Creative Commons
licenses. Other materials will be available on the basis of the fair-use
principle.
# 3 DATA RELATED TO THE OPENAIRE INFORMATION SPACE AND PRODUCTS

## 3.1 Data summary
The OpenAIRE-Advance project runs and maintains a research-information
e-infrastructure which collects, manages, aggregates, cross-links and analyzes
data about the research output across all research repositories. We term this
the OpenAIRE Information Space. Access to the Information Space for all
interested Internet users is provided through the main OpenAIRE portal (
_www.openaire.eu_ ), dashboards and various APIs.
The OpenAIRE infrastructure and portal are powered by the D-NET software,
which is openly available under the AGPL license. The Information Inference
Subsystem source code is publicly available on GitHub 1 . All extensions to
the software produced during OpenAIRE-Advance are also being made openly
available under the same license.
The content of the Information Space (the data populating it) is collected
from registered data providers (repositories, aggregators, CRISs, publishers,
funders) and from individual portal users, and it is also enriched by OpenAIRE
itself (through aggregation and inference algorithms). Bibliographic metadata
objects are in some cases complemented by corresponding full-text files. The
resulting information has the form of an object/knowledge graph of all of the
metadata present in the system. The graph is generated by aggregating content
collected from data providers and portal users and enriching it with content
generated by the internal Information Inference System (IIS). The graph is
processed in an HBase column store and delivered via APIs through various
back-ends: a Solr full-text index, MongoDB for OAI-PMH clients, Virtuoso for
LOD, and PostgreSQL for statistics. The objects (the collected metadata
records) conform to the internal Information Space Data Model. The total size
of the data stored in the Information Space is currently in the range of tens
of TBs.
Further enrichment of the Information Space content is expected through the
pilots defined in WP7, as a result of tight interoperability between
OpenAIRE-Advance and other external services. It is intended to support
bidirectional data exchange by not only absorbing information from external
services but also sharing both aggregated and inferred Information Space
contents. Interfacing will be established with scholarly communication
services such as Sherpa RoMEO and DataCite, and with the EC horizontal
e-infrastructures: EOSC-hub, EUDAT and the EGI virtual appliances database.
Another source of Information Space data enrichment will be the establishment
of interoperability between OpenAIRE and Research Infrastructure services,
allowing the publication of datasets and methods deposited and shared via RI
services.
Information about the portal users is introduced by the users themselves
during account creation and then during portal usage.
Usage activity on the OpenAIRE portal and usage events from participating
repositories are collected by means of a Piwik platform and then analyzed to
produce consolidated usage statistics. No personal information is collected
here, apart from the IPs of the users visiting the web sites.
In the OpenAIRE-Advance project, we re-use and further build on software and
Information Space content from the previous projects in the series (OpenAIRE,
OpenAIREplus, OpenAIRE2020).
The software and data related to the OpenAIRE Information Space may be of
interest to everyone working in the area of open science, science management
and administration, or for other reasons interested in the functioning and
outputs of research.
OPENAIRE-ADVANCE PRODUCTS
### Users engagement and feedback
Each new OpenAIRE-Advance product will have one dedicated user board; every
6-8 months, feedback will be gathered from users outside the OpenAIRE
consortium. The outcome can be obtained in various ways, e.g.:
* online tools for a quick feedback
* questionnaires
* interviews with specific stakeholders
Quick feedback responses should be gathered and stored with the means provided
by the chosen web framework.
Existing standard questionnaires for measuring usability and user experience
will be combined with questionnaires tailored to record feedback on the
specific services the platform offers. The outcome of questionnaires should be
persisted, e.g. in _box.openaire.eu_ , allowing easy access and further
processing.
Interviews can be conducted as semi-structured interviews or as usage
scenarios with users of the OpenAIRE dashboards. The interview outcomes should
be recorded as notes stored in _box.openaire.eu_ .
### Monitoring Dashboards
The following set of dashboards will monitor and display research results for
the specific stakeholder group:
* Funder Dashboard
* Project Dashboard
* Institutional Dashboard
All monitoring dashboards will include standardized data spreadsheets and
graphs produced and exported in different formats on the fly, based on real
time data as aggregated by OpenAIRE.
They will also include tools to produce on demand reports.
The Funder Dashboard was designed as a composition of two separate views:
* private view: Research Statistics Tool
* public view: Funder Dashboard / Portal
In order to increase impact through transparency, funders will be encouraged
to publicly share in the portal (via URL) all, or a selection, of the
statistics previously generated in the Research Statistics Tool. A very
similar approach was taken in the Project Dashboard design.
### Dashboard for Content Providers
The dashboard for content providers targets mainly repository managers and
will provide functionalities related to the registration and validation
process, and will display metadata enrichments and usage statistics (views and
downloads) for each registered data source.
This dashboard will include standardized data spreadsheets and graphs produced
and exported in different formats on the fly, based on real-time data as
aggregated by OpenAIRE, and will display the metadata records enriched via the
broker service. The dashboard for content providers was designed to provide
only a private view for registered repository managers.
### Other products
Gathering data management details related to other newly introduced
OpenAIRE-Advance products is still in progress, mainly because their design
and development are at an embryonic stage. In accordance with its ongoing
nature, the DMP will be kept up to date and supplemented with all the details
as soon as they become tangible.
## 3.2 FAIR data
Findable data. Access to the Information Space data is provided through the
OpenAIRE portal, dedicated dashboards and various APIs. Due to the size and
scope of the project, the Information Space data follows an internal data
model inspired by the CERIF and DataCite data models. The description and
documentation of the Data Model is openly available on the OpenAIRE2020 wiki
2 .
The software powering OpenAIRE (D-NET) and its documentation are findable
through the D-NET website and through the standard set of metadata and DOI
that accompany the D-NET deposit in the Zenodo repository.
Accessible data. The content of the Information Space is available for
download and re-use with no restrictions or embargo. OpenAIRE-Advance provides
continuous access to the information space via standard APIs. These APIs
include OAI-PMH and HTTP REST search APIs ( _api.openaire.eu_ ) and a SPARQL
Linked Open Data entry point ( _lod.openaire.eu_ ). The data are also openly
available for bulk-download through OAI-PMH.
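By way of illustration, bulk access could start from a simple keyword query; the following is a minimal sketch in which the endpoint path and parameter names are assumptions based on the public OpenAIRE API documentation, not a normative part of this DMP:

```python
import requests

# Query the HTTP REST search API for publications (endpoint and parameters
# assumed from the public OpenAIRE API documentation; they may change).
response = requests.get(
    "https://api.openaire.eu/search/publications",
    params={"keywords": "open science", "size": 10, "format": "json"},
    timeout=30,
)
response.raise_for_status()
results = response.json()
# Inspect the returned structure before relying on specific fields.
print(list(results))
```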
Data on portal usage are analyzed internally but not shared in raw form. Usage
statistics based on repository content views and downloads are publicly
available in the OpenAIRE portal and via a standardized API.
The full source code of the OpenAIRE D-NET software is stored at ISTI CNR 3
and can be requested there. A basic installation and toolkit is deposited in
GitHub and can be accessed there or through Zenodo 4 . IIS source code is
available on GitHub repository 5 .
Interoperable data. The content of the Information Space is stored in an
internal Data Model format inspired by the CERIF and DataCite standards.
Interoperability is supported by the provision of dedicated APIs that allow
the content to be exported in standard formats and standard data models. Also,
specific APIs will be provided on request to serve interoperability with
systems or infrastructures that are particularly relevant to OpenAIRE.
The source code follows standard software development procedures and contains
standard documentation.
Re-usable data. The content of the Information Space (the full OpenAIRE
knowledge graph) is available under a CC-BY license.
The D-NET source code and its extensions are all open under the AGPL license.
IIS is released under the Apache 2.0 license.
## 3.3 Resources and responsibilities
The responsibility for managing data related to the Information Space, portal
and dashboards lies with the partners: ICM, CNR, ARC and UNIBI. During the
project, the portal and Information Space data is stored at the ICM data
centre. After the project ends, the Information Space and its content, as well
as the D-NET software project, will be managed by the OpenAIRE Legal Entity,
which will maintain and curate the data and software for as long as it remains
in operation. Should the Legal Entity cease operation in the future, it will
be its responsibility to ensure that the data and software are transferred to
another relevant institution, or to provide for the long-term archiving of the
data.
## 3.4 Data security
The OpenAIRE infrastructure and all of the Information Space content are
currently 24/7 maintained and operated at the ICM data centre. Preservation
and back-ups of the data are ensured by the ICM data centre preservation
policies: daily backups, security updates, tightly controlled administrative
access.
3 _http://www.d-net.research-infrastructures.eu/_
4 _https://zenodo.org/record/168362_
5 _https://github.com/openaire/iis_
## 3.5 Ethical and legal aspects
Legally, it is on the side of the data providers - who register their services
(e.g. data or publication repositories, journals) with OpenAIRE \- to ensure
that the transfer of metadata records from their services to OpenAIRE does not
violate intellectual property rights. OpenAIRE assumes the data providers make
sure that the metadata records are free to be processed, and that it is
therefore entitled to make the Information Space graph (which is based on
these records) openly available under the CC-BY license.
In accordance with legal restrictions, personal data concerning the portal
users are not shared in any manner.
# 4 DATA RELATED TO ZENODO

## 4.1 Data summary
The OpenAIRE-Advance project runs and maintains the publication-and-data
repository Zenodo. The repository is open to all researchers worldwide to
deposit their publications and data sets. Thus, the repository hosts content
that it preserves but does not own any intellectual property rights over it.
Access to the Zenodo contents for all interested Internet users is provided
through the Zenodo user interface ( _zenodo.org_ ) and through appropriate
APIs (OAI-PMH and REST API).
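For instance, a read-only query against the REST API could look as follows; this is a minimal sketch whose endpoint and parameter names are assumptions taken from the public Zenodo API documentation rather than from this DMP:

```python
import requests

# Search published Zenodo records via the public REST API
# (endpoint and parameter names assumed from the Zenodo API documentation).
response = requests.get(
    "https://zenodo.org/api/records",
    params={"q": "data management plan", "size": 5},
    timeout=30,
)
response.raise_for_status()
for hit in response.json().get("hits", {}).get("hits", []):
    print(hit["metadata"]["title"])
```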
Software. Zenodo is powered by the Invenio repository platform, which is
available under the GPL license. All extensions to Zenodo produced during
OpenAIRE-Advance will also be made openly available under the same license.
The source code is stored in GitHub and can be accessed there 3 .
Content. Zenodo stores files of research publications and research datasets,
along with accompanying metadata. The files and metadata are provided by
individual repository users who voluntarily decide to store their research
output in Zenodo. The metadata is stored in JSON format 4 , and the
publication and data files are in formats chosen by the persons depositing
them (all file formats are accepted). The total size of the data stored in
Zenodo is currently 151 TB (38 TB logical), with 1.2 million files and 378
thousand published records.
Users. Information about the repository users is introduced by the users
during account creation and then through their use of the service. Zenodo does
not track, collect or retain personal information from its users, except as
otherwise provided in the repository. In order to enhance Zenodo and monitor
traffic, Zenodo collects, stores and analyzes non-personal information about
its users, such as IP addresses, cookies and log files. This data is stored
and may be shared in aggregated form with other community services.
For Zenodo, we re-use and extend software prepared and data collected during
previous projects in the series (OpenAIRE, OpenAIREplus, OpenAIRE2020).
The data that populate Zenodo may be of interest to everyone interested in
research outputs. The software is of special interest to everyone who is
considering to run any kind of repository (in particular for scientific
institutions world-wide).
## 4.2 FAIR data
Findable data. Access to Zenodo content is provided through the user interface
and through standard APIs. Zenodo mints DOIs via DataCite for all deposited
uploads and thus metadata is also discoverable through DataCite’s APIs. The
metadata is discoverable thanks to standardized metadata export formats (in
DataCite v3.1, MARC21 and DublinCore).
The software powering Zenodo (Invenio with modifications) and its
documentation are findable through the Zenodo and CERN websites and in GitHub.
Accessible data. The metadata of all records stored in Zenodo is openly
available for download and re-use with no restrictions or embargo. It is
licensed CC0 and free for download – excluding bulk download of e-mail
addresses. The publication and data files are made available precisely in
accordance with the choice made by their depositors: ranging from openly
available on a CC0 license to closed access.
The Zenodo software and its documentation are accessible in GitHub.
Interoperable data. The metadata of Zenodo records follows multiple standard
formats (JSON, DataCite v3.1, DublinCore and MARC21) and is thus
interoperable. Interoperability is further enhanced by the API, which allows
the entire repository to be harvested via the standard OAI-PMH protocol. The
publication and data files are available in all possible formats; ensuring
their interoperability lies with the depositors.
The source code is interoperable as it follows standard software development
procedures and Python community standards for code style and documentation,
and contains standard documentation.
Re-usable data. The metadata of Zenodo records is licensed under CC0 and can
thus be downloaded and used without any restrictions, excluding bulk download
of e-mail addresses which is specifically prohibited. The files are licensed
in accordance with the precise choice of their depositors: ranging from openly
available on a CC0 or CC-BY or other license to restricted or closed access.
Since all intellectual property rights lie with the data producers
(depositors), Zenodo must respect these choices.
The Zenodo source code and its extensions are all open under the GPLv2 or
later version license.
## 4.3 Resources and responsibilities
The responsibility for managing data related to Zenodo lies with CERN. The
repository is hosted in the CERN Data Centre physically located on CERN
premises. After the project ends, Zenodo and its content will be managed
either by CERN or by the OpenAIRE Legal Entity; the responsible institution
will maintain and curate the data and software as long as it remains in
operation. In case the managing institution quits operation in the future, it
will be its responsibility to ensure that the data and software are
transferred to a further relevant institution or archived in an appropriate
manner.
## 4.4 Data security
Zenodo is maintained and operated by CERN. All data files are stored in CERN’s
EOS disk storage service in two independent copies, with each copy having two
replicas. Currently all file replicas reside in the same physical data centre,
but work is ongoing to keep one replica in CERN’s second data centre. MD5
checksums are computed for all files by both Invenio repository software and
EOS, and file integrity checks are regularly being performed.
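As a generic illustration of such a checksum-based integrity check (a minimal sketch, not the Invenio/EOS implementation itself):

```python
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 checksum of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_intact(path: str, stored_md5: str) -> bool:
    """Return True if the file on disk still matches its stored checksum."""
    return md5_of(path) == stored_md5
```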
Metadata is stored in a PostgreSQL database managed by CERN's Database Team,
with 6-hourly backups to disk and weekly backups to tape.
Operational systems are managed and governed according to CERN security
policies, and CERN staff with access to operational systems and data (either
physical or remote) operates under CERN Operational Circular no. 5.
## 4.5 Ethical and legal aspects
Legally, the data depositors – who upload their research outputs to Zenodo -
agree to provide their metadata under the CC0 license, and actively specify
the legal status of the data files. It is therefore clear that Zenodo may make
all metadata openly available.
The terms on which portal users introduce their data during user registration
and while using the repository (searching, uploading) are regulated by the
Terms of Use.
# INTRODUCTION
The present Data Management Plan (DMP) details what data the project will
generate, whether and how it will be exploited or made accessible for
verification and re-use, and how it will be curated and preserved. This
document is to be considered in combination with:
* Section 9 “Access Rights” and Attachment 1 “Background included” of the Consortium Agreement, dealing with access rights and the use of the Workflow Tool.
* Section 3 “Rights and obligations related to background and results” of the Grant Agreement No. 777594, dealing with rights and obligations related to background and results.
The DMP is organised per Work Package (WP) to concretely describe the
contribution of each WP to the outcomes as well as the spin-off potential of
each activity.
To understand the data that the project will generate, a brief overview of the
project is given below:
To meet the needs of S2R and Horizon 2020, OptiYard will design optimised
processes for managing marshalling yards and terminals, considering their
interaction with the network. The processes considered are those that must be
performed in real-time, to guarantee on-time delivery and operational
efficiency, for single wagon transport.
OptiYard addresses critical operational points of the transport chain (both
rail marshalling yards and transfer points to other modes) to improve
capacity and reliability. Most importantly, these improvements will enhance
competitiveness whilst increasing service reliability and customer
satisfaction by providing accurate and updated information. Real-time
interaction between yard and relevant network IT systems will allow for
software-based planning and ultimately optimisation of single wagonload and
blocktrain operational processes.
The lack of full integration between yard and network is a current weakness,
and one that grows as more real-time data becomes available, because without
such integration the opportunities to realise the benefits of the improved
data are missed. Hence, much more progress is needed in integrating the
information and control systems of the yard with those of the network. It is
in this field that OptiYard offers the most exciting possibilities.
Large rail freight marshalling yards are complex operations that present major
challenges to operational efficiency; managing them effectively even in
stand-alone mode is a demanding task, requiring sophisticated scheduling
systems. The arrival and departure of freight trains to/from the yard are
closely linked to the operations of the wider network ecosystem, making some
of the yard operation processes (shunting, marshalling and departing train
dispatches) more time-critical than others.
Thus, a key challenge to the future success of yard management lies in the
real-time information exchange between the yard and the relevant network
ecosystem, and the interactive responses between the yard and the network
managements. With such information capabilities, yard operations could be
rescheduled at short notice to take account of perturbations on the network,
such as the delayed arrival of an incoming freight train, allowing rapid
re-optimisation of yard operations. Real-time network information could also
be used to identify more accurate departure times for trains leaving yards,
again allowing for beneficial rescheduling of yard operations.

Hence, OptiYard develops a holistic approach to providing a real-time
decision-making framework for optimising yard operations and the yard-network
interface.

The technical WPs will address the following areas:

* WP2 Data Analytics
* WP3 Specification of the OptiYard Simulation Environment
* WP4 Modelling
* WP5 Process Optimisation
* WP6 Business Cases - Feasibility & Simulation Tests
* WP7 Dissemination, Communication and Results Exploitation
# DATA MANAGEMENT AT PROJECT LEVEL
## DATA COLLECTION
Data exchange is crucial in this project to allow an efficient and reliable
connection between the yard and the surrounding network. For simulation
purposes, data will be stored and used, which makes it necessary to define a
corresponding data management plan. In addition to the data used for the
technical aspects of the project, rules also need to be defined for data
related to management and dissemination activities.
Each Work Package Leader is responsible for defining and describing all (non-
generic) data sets specific to their individual work package.
The WP leaders shall formally review the data sets related to their WP when
relevant, and at least at the time of the first and second periodic project
reports to the European Commission.
All modifications and additions to the DMP shall be provided to the OptiYard
Coordinator, UIC, for inclusion in the DMP.
## DATA ARCHIVING & PRESERVATION
A Workflow Tool platform was created to support the work of the consortium
members. To access the website, users register with a valid e-mail address,
choose a password and then validate a link received by e-mail. Once this is
done, the administrator of the Workflow Tool validates the registration to the
OptiYard workspace. Beneficiaries who do not have access to the website can
ask the Coordinator to open an account.

OptiYard partners are strongly encouraged to use the website to share project
information. The main functionality to be used is the upload and download of
documents (contact list, deliverables, minutes of meetings, agendas,
presentations, Technical Annex of the Grant Agreement, etc.).

An instruction manual on how to use the Workflow Tool has been circulated
among beneficiaries; the document is also accessible on the website (Tutorial
for all Project Members).
At the end of the project, when OptiYard is formally closed, all data material
that has been collated or generated within the project and registered on the
Workflow Tool shall be copied and transferred to a digital archive.
### Data Security & Integrity
The OptiYard project will be subject to the same levels of data security as
applied to normal operations for the Workflow Tool within UIC.
Data uploaded to the Workflow Tool shall not be encrypted, irrespective of
whether the data items have been identified for future archiving.
Members are granted access rights in the Workflow Tool according to their role
in the project.
Rights apply to directories, calendars and documents. They define which parts
can be seen and which actions can be done and by which group(s). Rights are
therefore given to group(s) on objects like directories, calendars and
documents.
Every rights-settings interface looks the same; only the list of rights
differs. As an example of rights applied to directories, there are rights to
"view" the directory, "modify" it, "export" it, and "assign" or "unassign" a
user to it. All members of the project have access to all documents and
meetings in the tool.
All partners contributing to a work package are given rights to create new
documents.
The Coordinator has overall project administration rights, enabling them to
administer the complete project document database.
When the Coordinator intends to modify a WP domain, they are obliged to inform
the relevant WP leaders about the intended changes to the document database.
### Document Archiving
The document structure and type definition will be preserved as defined in the
document breakdown structure and work package groupings specified for the
Workflow Tool.
The process of archiving will be based on a data extract performed by UIC
within 12 weeks of the formal closure of the OptiYard project.
## FILE NAMING CONVENTIONS
Whenever a new document is produced within the project, it must be uploaded to
the Workflow Tool. Before a document is uploaded, a unique document code must
be assigned; the following subsections describe how this identification code
is structured and set up.
The identification code contains the six following sections:
**[Project] – [Domain] – [Type] – [Owner] – [Number] – [Filename]**
* [Project] is OptiYard for all OptiYard documents
* [Domain] is the relevant domain in the Workflow Tool (WP, Task or project body)
* [Type] is one letter defining the document category
* [Owner] is the trigram of the deliverable leader organisation
* [Number] is an order number allocated by the publisher when the document is first created
* [Filename] is a short description of the document

Examples:
For documents being circulated internally without having been uploaded to the
Workflow Tool first, the filename should be meaningful, mentioning the project
name, WP name, type, the partner sharing the document, and a short description
(e.g. OptiYard_WP1_A_UIC_001_WBS).
Documents are classified among the following types:
## DATA & SHIFT2RAIL
The OptiYard deliverables and all other related generated data are
fundamentally linked to the future planned Shift2Rail project activity.
The data requirements of this DMP have been developed with the objective of
providing data structures that are uniform and not open to ambiguous future
interpretation, in order to facilitate synergies.
Data shall be specifically selected for archiving based on the criteria that
it will be likely to be useful for future Shift2Rail activities.
# DMP OF WP1: MANAGEMENT
## DATA TYPES
Existing data used in this WP include the following data types:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of**
**Dataset / Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
OptiYard-1.1
</td>
<td>
Database of consortium partners: this database contains data such as name,
e-mail, company, telephone.
</td>
<td>
.xls
</td>
<td>
Evolves depending on updates to the mailing list.
</td>
<td>
The data will be kept in the UIC servers in accordance with the provisions of
Regulation (EU) 2016/679 OF THE EUROPEAN
PARLIAMENT AND OF THE
COUNCIL of 27 April 2016 on the protection of natural persons with regard to
the processing of personal data and on the free movement of such data, and
repealing Directive 95/46/EC (General Data Protection Regulation).
</td> </tr> </table>
**Table 1: Existing Data used in WP1**
No additional data are planned to be generated in this WP.
## STANDARDS, METADATA AND QUALITY ISSUES
No specific standards and metadata are planned to be used for data related to
WP1.
## DATA SHARING
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
OptiYard-1.2
</td>
<td>
This data is confidential and only the consortium partners will have access to
it.
</td> </tr> </table>
**Table 2: Data Sharing in WP1**
## ARCHIVING AND PRESERVATION
<table>
<tr>
<th>
**Code**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
OptiYard-1.1
</td>
<td>
Data will be stored on the UIC server.
</td> </tr> </table>
**Table 3: Archiving and preservation of the data in WP1**
## DATA MANAGEMENT RESPONSIBILITIES
<table>
<tr>
<th>
**Code**
</th>
<th>
**Name of Responsible**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
OptiYard-1.1
</td>
<td>
Giancarlo DE MARCO TELESE (UIC)
</td>
<td>
Update and maintenance of the data
</td> </tr> </table>
**Table 4: Data Management Responsibilities in WP1**
# DMP OF WP2: DATA ANALYTICS
## DATA TYPES
Existing data used in this WP include the following data types:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of**
**Dataset / Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
RailData ISR
</td>
<td>
ISR, the IT service used by railway undertakings to exchange information about
wagons' movements
</td>
<td>
Data are used to enrich the
movement
information with the consignment data (transport description)
</td>
<td>
Large
</td>
<td>
http://www.raildata.coop/isr
</td> </tr>
<tr>
<td>
RNE (TIS)
</td>
<td>
The Train Information System is a web-based application that supports
international train management by delivering real-time train data concerning
international (and partly national) passenger and freight trains. The relevant
data is obtained directly from the Infrastructure Managers' systems.
</td>
<td>
Fully TAF/TAP TSI-compliant
</td>
<td>
Large
</td>
<td>
http://tis.rne.eu/
</td> </tr>
<tr>
<td>
XML files
</td>
<td>
For data exchange in marshalling facilities. XML is a file extension for an
Extensible Mark-up Language (XML) file format used to create common
information formats and share both the format and the data on the World Wide
Web, intranets, and elsewhere, using standard ASCII text.
</td>
<td>
XML is similar to
HTML. Some
marshalling yards use this format for data exchange.
</td>
<td>
Large
</td>
<td>
A Marshalling yard/facility
</td> </tr>
<tr>
<td>
XSD Schema
file;
</td>
<td>
To keep files of operating processes with freight trains. A file with the XSD
file extension is most likely an XML Schema file.
</td>
<td>
A text-based file format that defines validation rules for an XML file and
explains the XML form. XML files can reference an XSD file with the
schemaLocation attribute.
</td>
<td>
Large
</td>
<td>
MYs, RUs and IMs
</td> </tr>
<tr>
<td>
railML
</td>
<td>
railML is published as a series of XML schemas holding subschemas, each of
which
encompasses a
particular field of
railway application:
* Common
concepts and objects, sometimes not mentioned separately;
* Timetable (TT);
* Rolling stock (RS);
* Infrastructure
(IS), both macroscopic and microscopic;
* Interlocking, from railML 3 on.
</td>
<td>
railML is a data exchange format developed by a consortium of railway
companies,
academic
institutions and consultancy firms.
</td>
<td>
Large
</td>
<td>
railML.org
</td> </tr> </table>
**Table 5: Existing Data used in WP2**
No data are planned to be generated in this WP.
## STANDARDS, METADATA AND QUALITY ISSUES
Data sources have been identified. For the purposes of WP2, no data has been
generated.
## DATA SHARING
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
OptiYard-2.1
</td>
<td>
Information collected from public sources; used for the purposes of D2.1
</td> </tr> </table>
**Table 6: Data Sharing in WP2**
## ARCHIVING AND PRESERVATION
<table>
<tr>
<th>
**Code**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
OptiYard-2.1
</td>
<td>
For internal use and the purposes of D2.1
</td> </tr> </table>
**Table 7: Archiving and preservation of the data in WP2**
## DATA MANAGEMENT RESPONSIBILITIES
<table>
<tr>
<th>
**Code**
</th>
<th>
**Name of Responsible**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
OptiYard-2.1
</td>
<td>
Marin Marinov, UNEW
</td>
<td>
Data identified is not confidential
</td> </tr> </table>
**Table 8: Data Management Responsibilities in WP2**
# DMP OF WP3: SPECIFICATION OF THE OPTIYARD SIMULATION ENVIRONMENT
## DATA TYPES
WP3 sets out and describes the technical and functional specifications of the
models for yards and networks, and as such does not utilise formal data sets,
or produce formal datasets for subsequent work packages.
## STANDARDS, METADATA AND QUALITY ISSUES
WP3 discussed issues to address in developing a data management interface for
real-time yard and network management system. The key elements identified are
summarised below:
* Real-time animation facilitated by flexible simulation tools
* Clear specification of metadata sets
* Online reporting on the productivity of the rail freight system
* IT security in accordance with ISO 27001:2013
* Documentation of performance monitoring and updates
* Search facilities for metadata inspection and interrogation
* Common data structure and interface, for data portability
* Common specification for real time management interface
## DATA SHARING
Any identified data will be shared via the UIC server.
## ARCHIVING AND PRESERVATION
Any identified data will be stored via the UIC server.
## DATA MANAGEMENT RESPONSIBILITIES
Not applicable.
# DMP OF WP4: MODELLING
## DATA TYPES
Existing data used in this WP include the following data types:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of**
**Dataset / Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
OptiYard-4.1
</td>
<td>
Trieste Campo Marzio terminal track layouts
</td>
<td>
jpeg pdf
dwg
</td>
<td>
10 MB
</td>
<td>
Adriafer
Confidential
</td> </tr>
<tr>
<td>
OptiYard-4.2
</td>
<td>
Trieste Campo Marzio schematic track diagrams with signalling
</td>
<td>
Tiff pdf
</td>
<td>
10 MB
</td>
<td>
Adriafer
</td> </tr>
<tr>
<td>
OptiYard-4.3
</td>
<td>
Trieste Campo Marzio CAD drawing
</td>
<td>
dwg
</td>
<td>
2 MB
</td>
<td>
Adriafer
Confidential
</td> </tr>
<tr>
<td>
OptiYard-4.4
</td>
<td>
Trieste Campo Marzio surrounding
railway network
maps
</td>
<td>
pdf
</td>
<td>
10 MB
</td>
<td>
Adriafer
</td> </tr>
<tr>
<td>
OptiYard-4.5
</td>
<td>
Trieste Campo Marzio anonymised train schedules
</td>
<td>
pdf
</td>
<td>
100 kB
</td>
<td>
Adriafer
</td> </tr>
<tr>
<td>
OptiYard-4.6
</td>
<td>
Ceska Trebova terminal track layouts
</td>
<td>
dwg
</td>
<td>
500 kB
</td>
<td>
CD Cargo/Oltis confidential
</td> </tr>
<tr>
<td>
OptiYard-4.7
</td>
<td>
Ceska Trebova schematic track diagrams with signalling
</td>
<td>
xlsx
</td>
<td>
1.5 MB
</td>
<td>
CD Cargo
</td> </tr>
<tr>
<td>
OptiYard-4.8
</td>
<td>
Ceska Trebova CAD drawing
</td>
<td>
dwg
</td>
<td>
500 kB
</td>
<td>
CD Cargo/Oltis confidential
</td> </tr>
<tr>
<td>
OptiYard-4.9
</td>
<td>
Ceska Trebova surrounding railway network maps
</td>
<td>
various formats
</td>
<td>
10 MB
</td>
<td>
CD Cargo
</td> </tr>
<tr>
<td>
OptiYard-4.10
</td>
<td>
Ceska Trebova anonymised train schedules
</td>
<td>
various formats
</td>
<td>
1 MB
</td>
<td>
CD Cargo
</td> </tr>
<tr>
<td>
OptiYard-4.11
</td>
<td>
Ceska Trebova anonymised wagon lists for inbound trains
</td>
<td>
xlsx
</td>
<td>
1 MB
</td>
<td>
CD Cargo/Oltis confidential
</td> </tr> </table>
**Table 9: Existing Data used in WP4**
Data generated in this WP include the following types:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of**
**Dataset / Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
OptiYard-4.12
</td>
<td>
Ceska Trebova Yard Model (non-optimized operation)
</td>
<td>
Executable
(installation package)
</td>
<td>
50 MB
</td>
<td>
SIMCON
</td> </tr>
<tr>
<td>
OptiYard-4.13
</td>
<td>
Ceska Trebova Yard simulation results (non-optimized operation)
</td>
<td>
Results are part of the simulation model with stored simulation protocol from
simulation run
(included in installation package)
</td>
<td>
see executable with yard model
</td>
<td>
SIMCON/CD Cargo
confidential
</td> </tr>
<tr>
<td>
OptiYard-4.14
</td>
<td>
Trieste Yard Model (non-optimized operation)
</td>
<td>
Executable
(installation package)
</td>
<td>
50 MB
</td>
<td>
SIMCON
</td> </tr>
<tr>
<td>
OptiYard-4.15
</td>
<td>
Trieste Yard simulation results (non-optimized operation)
</td>
<td>
Results are part of the simulation model with stored simulation protocol from
simulation run
(included in installation package)
</td>
<td>
see executable with yard model
</td>
<td>
SIMCON/Adriafer confidential
</td> </tr>
<tr>
<td>
OptiYard-4.16
</td>
<td>
Ceska Trebova Network Model
</td>
<td>
executable
</td>
<td>
1 MB
</td>
<td>
LEEDS
</td> </tr>
<tr>
<td>
OptiYard-4.17
</td>
<td>
Ceska Trebova Network simulation results
</td>
<td>
various formats
</td>
<td>
50 MB
</td>
<td>
LEEDS
</td> </tr>
<tr>
<td>
OptiYard-4.18
</td>
<td>
Trieste Network Model
</td>
<td>
executable
</td>
<td>
1 MB
</td>
<td>
LEEDS
</td> </tr>
<tr>
<td>
OptiYard-4.19
</td>
<td>
Trieste Network simulation results
</td>
<td>
various formats
</td>
<td>
50 MB
</td>
<td>
LEEDS
</td> </tr> </table>
**Table 10: Data Generated in WP4**
## STANDARDS, METADATA AND QUALITY ISSUES
All the data produced within this work package will follow the rules set out
in chapters 2.2 and 2.3.
## DATA SHARING
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
OptiYard-4.20
</td>
<td>
The identified data is shared via the UIC server.
</td> </tr> </table>
**Table 11: Data Sharing in WP4**
## ARCHIVING AND PRESERVATION
<table>
<tr>
<th>
**Code**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
OptiYard-4.21
</td>
<td>
Data is stored on the UIC server.
</td> </tr> </table>
**Table 12: Archiving and preservation of the data in WP4**
## DATA MANAGEMENT RESPONSIBILITIES
<table>
<tr>
<th>
**Code**
</th>
<th>
**Name of Responsible**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
OptiYard-4.22
</td>
<td>
Miloš Zaťko (SIMCON)
</td>
<td>
Update and maintenance of non-optimized yard simulation models
</td> </tr> </table>
**Table 13: Data Management Responsibilities in WP4**
# DMP OF WP5: PROCESS OPTIMISATION
## DATA TYPES
Data generated in this WP include the following types:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of**
**Dataset / Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
OptiYard-5.x
</td>
<td>
Ceska Trebova Yard Model with interface to
optimization module
</td>
<td>
Executable
(installation package)
</td>
<td>
50 MB
</td>
<td>
SIMCON
</td> </tr>
<tr>
<td>
OptiYard-5.x
</td>
<td>
Trieste Yard Model with interface to
optimization module
</td>
<td>
Executable
(installation package)
</td>
<td>
50 MB
</td>
<td>
SIMCON
</td> </tr> </table>
## STANDARDS, METADATA AND QUALITY ISSUES
We use the XML standard, fixed by the following (and typical) schema header:

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">

All the data of our XML data model follow the W3C standard.
We use two categories of files. The metadata of the first gathers the static
data (e.g., yard layout, signals, available operations). The second file
groups the data which can change over time (e.g., trains/cars, yard
locomotives).
The following quality issues will be handled:
We obtain a standardized XML model with the software Oxygen XML Editor. This
editor integrates the latest version of the Xerces-J XML parser to validate
documents against XML Schemas, i.e., it checks that the documents are
well-formed and also conform to the rules of a Document Type Definition (DTD),
XML Schema, or other type of schema that defines the structure of an XML
document.
We declare all the data types and structures through the software (lists,
lists of lists, etc.). As a result, we obtain a normalized schema, saved in an
'.xsd' file. The same software automatically generates all the documentation
explaining in detail the different metadata used for the two types of file.
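To make the validation step concrete, the following is a minimal sketch using the Python lxml library; the file names are placeholders, and the sketch stands in for the Oxygen/Xerces-J toolchain described above rather than reproducing it:

```python
from lxml import etree

# Load the normalized schema (.xsd) and one instance document; the file names
# are placeholders for the static- and dynamic-data files described above.
schema = etree.XMLSchema(etree.parse("yard_static.xsd"))
document = etree.parse("yard_static.xml")  # parsing already enforces well-formedness

if schema.validate(document):
    print("document is valid against the schema")
else:
    for error in schema.error_log:
        print(f"line {error.line}: {error.message}")
```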
## DATA SHARING
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
OptiYard-5.x
</td>
<td>
The identified data is shared via the UIC server.
</td> </tr> </table>
## ARCHIVING AND PRESERVATION
<table>
<tr>
<th>
**Code**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
OptiYard-5.x
</td>
<td>
Data is stored on the UIC server.
</td> </tr> </table>
## DATA MANAGEMENT RESPONSIBILITIES
<table>
<tr>
<th>
**Code**
</th>
<th>
**Name of Responsible**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
OptiYard-5.x
</td>
<td>
Miloš Zaťko (SIMCON)
</td>
<td>
Update and maintenance of yard simulation models with interface to
optimization module
</td> </tr> </table>
# DMP OF WP6: BUSINESS CASES - FEASIBILITY & SIMULATION TESTS
## DATA TYPES
Existing data used in this WP include the following data types:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of**
**Dataset / Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
Optiyard-6.1
</td>
<td>
Mainly input from the other WP
</td>
<td>
Mainly Word and Excel files
</td>
<td>
Variable
</td>
<td>
Respective WP leaders
</td> </tr>
<tr>
<td>
OptiYard-6.2
</td>
<td>
Mainly input from the other WP
</td>
<td>
Mainly Word and
Excel files
</td>
<td>
Variable
</td>
<td>
Respective WP leaders
</td> </tr>
<tr>
<td>
OptiYard 6.3
</td>
<td>
Mainly input from the other WP
</td>
<td>
Mainly Word and
Excel files
</td>
<td>
Variable
</td>
<td>
Respective WP leaders
</td> </tr> </table>
**Table 14: Existing Data used in WP6**
Data generated in this WP include the following types:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of**
**Dataset / Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
Optiyard-6.1
</td>
<td>
Agenda, minutes, reports, slides…
</td>
<td>
Word, Excel and PPT files
</td>
<td>
Variable
</td>
<td>
WP leader (UIRR)
</td> </tr>
<tr>
<td>
OptiYard-6.2
</td>
<td>
Agenda, minutes, reports, slides…
</td>
<td>
Word, Excel and PPT files
</td>
<td>
Variable
</td>
<td>
WP leader (UIRR)
</td> </tr>
<tr>
<td>
OptiYard 6.3
</td>
<td>
Agenda, minutes, reports, slides…
</td>
<td>
Word, Excel and PPT files
</td>
<td>
Variable
</td>
<td>
WP leader (UIRR)
</td> </tr> </table>
**Table 15: Data Generated in WP6**
## STANDARDS, METADATA AND QUALITY ISSUES
All the data produced within this work package will follow the rules set out
in chapters 2.2 and 2.3.
## DATA SHARING
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
Optiyard-6.1
</td>
<td>
Data will only be shared with the project partners. Sharing with third parties
will only be possible with prior consent of the SMC.
</td> </tr>
<tr>
<td>
OptiYard-6.2
</td>
<td>
Sharing with third parties will only be possible with prior consent of the
SMC.
</td> </tr>
<tr>
<td>
OptiYard 6.3
</td>
<td>
Sharing with third parties will only be possible with prior consent of the
SMC.
</td> </tr> </table>
**Table 16: Data Sharing in WP6**
## ARCHIVING AND PRESERVATION
<table>
<tr>
<th>
**Code**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
Optiyard-6.1
</td>
<td>
Stored and archived until the end of the project on the UIC server – after the
project lifetime: at least until the official end of the period of an
internal/external audit
</td> </tr>
<tr>
<td>
OptiYard-6.2
</td>
<td>
Stored and archived until the end of the project on the UIC server – after the
project lifetime: at least until the official end of the period of an
internal/external audit
</td> </tr>
<tr>
<td>
OptiYard 6.3
</td>
<td>
Stored and archived until the end of the project on the UIC server – after the
project lifetime: at least until the official end of the period of an
internal/external audit
</td> </tr> </table>
**Table 17: Archiving and preservation of the data in WP6**
## DATA MANAGEMENT RESPONSIBILITIES
<table>
<tr>
<th>
**Code**
</th>
<th>
**Name of Responsible**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
OptiYard-6.1
</td>
<td>
Eric FEYEN (UIRR)
</td>
<td>
Storage and maintenance of all related data for task 6.1
</td> </tr>
<tr>
<td>
OptiYard-6.2
</td>
<td>
Eric FEYEN (UIRR)
</td>
<td>
Storage and maintenance of all related data for task 6.2
</td> </tr>
<tr>
<td>
OptiYard-6.3
</td>
<td>
Armand CARILLO (EURNEX)
</td>
<td>
Storage and maintenance of all related data for task 6.3
</td> </tr> </table>
**Table 18: Data Management Responsibilities in WP6**
# DMP OF WP7: DISSEMINATION, COMMUNICATION AND RESULTS EXPLOITATION
## DATA TYPES
Existing data used in this WP include the following data types:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of**
**Dataset/Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
OptiYard-7.1
</td>
<td>
Images: Images and logos from partners participating in the project.
</td>
<td>
.eps, .ai,
.png,
.jpeg
</td>
<td>
Variable
</td>
<td>
The owner gives permission to UIC to use images for dissemination purposes of
OptiYard.
</td> </tr>
<tr>
<td>
OptiYard-7.2
</td>
<td>
Database of Advisory Board: this database contains data such as name, e-mail,
company, telephone and field of expertise of the partners participating in the
Advisory Board.
</td>
<td>
.xls
</td>
<td>
Evolving, depending on updates to the mailing list.
</td>
<td>
The data will be kept on the UIC servers in accordance with the provisions of
Regulation (EU) 2016/679 of the European Parliament and of the Council of
27 April 2016 on the protection of natural persons with regard to the
processing of personal data and on the free movement of such data, and
repealing Directive 95/46/EC (General Data Protection Regulation).
</td> </tr> </table>
**Table 19: Existing Data used in WP7**
No specific data is planned to be generated in this work package.
## STANDARDS, METADATA AND QUALITY ISSUES
The pictures and logos are stored in common formats: vector image formats and
picture compression standards.
## DATA SHARING
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
OptiYard-7.1
</td>
<td>
The data will not be shared, but some of the images in the database will be
used for dissemination purposes and will therefore become public.
</td> </tr>
<tr>
<td>
OptiYard-7.2
</td>
<td>
This data is confidential and only the consortium partners will have access to
it.
</td> </tr> </table>
**Table 20: Data Sharing in WP7**
## ARCHIVING AND PRESERVATION
<table>
<tr>
<th>
**Code**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
OptiYard-7.1 and 7.2
</td>
<td>
Data will be stored on the UIC server.
</td> </tr> </table>
**Table 21: Archiving and preservation of the data in WP7**
## DATA MANAGEMENT RESPONSIBILITIES
<table>
<tr>
<th>
**Code**
</th>
<th>
**Name of Responsible**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
OptiYard-7.1 and 7.2
</td>
<td>
Giancarlo DE MARCO TELESE
(UIC)
</td>
<td>
Update and maintenance of the data
</td> </tr> </table>
**Table 22: Data Management Responsibilities in WP7**
# CONCLUSION
The purpose of the Data Management Plan is to support the data management life
cycle for all data that will be collected, processed or generated by the
OptiYard project. The DMP is not a fixed document but evolves during the
lifespan of the project. This document is expected to mature during the
project; more developed versions of the plan could be included as additional
revisions of this deliverable at later stages. The DMP will be updated at
least after the mid-term and final reviews to fine-tune it to the data
generated and the uses identified by the consortium, since not all data or
potential uses are defined at this stage of the project.
1427_e.THROUGH_778045.md
# DATA SUMMARY
What is the purpose of the data collection/generation and its relation to the
objectives of the project?
The e.THROUGH project aims to improve the availability of a qualified and
skilled workforce, leading to higher competitiveness of the EU critical raw
materials (CRM) industry. e.THROUGH will contribute to improving transparency
and communication in general through multidisciplinary research on mining
development.
The specific objectives of the project are:
_O.1_ To promote new trends in the characterization and exploration of mineral
deposits, in order to link planning with the sustainable development of
mineral deposits regionally, nationally and globally;
_O.2_ To map (C)RMs between EU mining regions, identifying synergies between
(C)RM value chains, market and societal stakeholders and also operational
synergies between R&I investments within and across the regions;
_O.3_ To gain fundamental and applied knowledge on emerging technologies for
recovering secondary CRMs (e.g. W, Ga, In and Cr) with value for industry,
from mine tailings and other fluxes;
_O.4_ To redesign structures and construction materials using secondary
materials, closing loops, strongly supporting waste minimization in society;
_O.5_ To perform Life Cycle Assessments (LCA) for the evaluation of global
environmental impacts, comparing new methods with existing technologies, as
well as redesigned products with conventional ones, aiming to monitor progress
in sustainable production and consumption;
_O.6_ To transfer newly generated knowledge to stakeholders, contributing for
policy development and standardization, as well as for shaping responsible and
sustainable behaviours.
So, taking into account the rich region-specific geographies of the
consortium, a great deal of knowledge will be shared by both Early Stage
Researchers (ESR) and Experienced Researchers (ER), who will interact during
the project, contributing to an understanding of what has been conducive and
what has been a hindrance to the development of extractive and metallurgical
industries. It will also provide the context for a bottom-up integration of
these activities into their respective interdisciplinary fields, enriching
this skilled workforce and leading to higher competitiveness of the EU CRM
domain.
e.THROUGH will build and extend the knowledge base by exchanging experience
and mutual learning among intersectoral and multidisciplinary partners, which
will provide new learning opportunities, using existing infrastructure and
available research facilities. Partners shaping the e.THROUGH programme of
innovation activities and knowledge transfer capitalize on past projects from
the consortium (e.g. the FP7 project HydroWEEE-DEMO, the LIFE project
PHOTOLIFE, and H2020 MSP-REFRAM – Multi-Stakeholder Platform for a Secure
Supply of Refractory Metals in Europe) and on ongoing projects (e.g. INTERREG
Europe REMIX – Smart and Green Mining Regions of EU, MIREU – Mining and
Metallurgy Regions of EU (H2020-SC5-2017-776811), and the European Institute
of Innovation and Technology (EIT) RM TravelEx – Underground Resources
Travelling Exhibition).
The generated scientific relevant knowledge will also be transferred to
relevant stakeholders with the special focus on mining regions, in order to
help shape individual citizens’ behaviour. Two stakeholder workshops will take
place, in combination with the ongoing REMIX project (the first at month 12,
in Fundão, Portugal).
What types and formats of data will the project generate/collect?
e.THROUGH WP5 will provide support on Communication and Dissemination, in
delivering knowledge gained during the project. This knowledge will be semi-
structured (e.g., formatted datasets) or non-structured (documents, graphs,
tables, pictures, etc.), and properly indexed in order to be retrievable and
exploitable.
Concerning the type of data to be produced, it is split among reports,
articles, databases, and a photo album. Most of the data will be in PDF
format, followed by .docx and .xlsx, and then .jpeg/.avi/.wmv and similar
audio/video formats.
Will you re-use any existing data and how?
Some existing datasets are proposed to be used, such as ProMine and
OneGeology. Furthermore, e.THROUGH will benefit from developments for other
past and on-going H2020 projects (MIREU, REMIX, etc.) (see first answer), in
order to maximize efficiency of EC funding.
What is the origin of the data?
See the answer to the previous question.
What is the expected size of the data?
N/A
To whom might it be useful ('data utility')?
(1) Public and private organizations; (2) academia; and (3) the general
public. In fact, all stakeholders who are interested in facilitating the
exchange of knowledge on critical raw materials production and recovery
capacities, including:
* individual EU citizens searching for information on raw materials,
* mining companies carrying out exploration or planning to invest in the EU,
* relevant competent authorities, the private sector, research and academic organisations, civil society and local communities, and experts,
* political decision-makers on matters related to mineral raw materials at EU, regional and national level,
* NGOs.
# FAIR DATA
## Making data findable, including provisions for metadata
Are the data produced and/or used in the project discoverable with metadata,
identifiable and locatable by means of a standard identification mechanism
(e.g. persistent and unique identifiers such as Digital Object Identifiers)?
The vast majority is open access data, although some is confidential and
restricted. The data users will be able to locate the original data sources,
as links to these web services will be included in deliverables and the
project website.
What naming conventions do you follow?
European Minerals Knowledge Data Platform (EU-MKDP) service Registry naming
conventions.
Will search keywords be provided that optimize possibilities for re-use?
In the project website, keywords will be used as tags to ensure that search
engines will easily find them. Pertinent applications (e.g., thematic
searches, statistics, etc.) will be defined, in order to facilitate diffusion
of knowledge and ease networking and exchange of information.
Do you provide clear version numbers?
Clear version numbers will be provided, whenever the deliverable/data is
planned to be updated.
What metadata will be created? In case metadata standards do not exist in your
discipline, please outline what type of metadata will be created and how.
In e.THROUGH metadata will be created following the INSPIRE directive in all
possible instances.
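As an illustration only, the sketch below assembles a minimal, INSPIRE-oriented discovery record using the standard ISO 19139 namespaces; the element nesting is deliberately simplified (a real record nests titles inside citation elements and would be validated against the schema), and all identifiers and field values are hypothetical.

```python
# Illustrative sketch only: a simplified, INSPIRE-oriented discovery
# record. Namespaces are the standard ISO 19139 ones, but the nesting
# is flattened for brevity and all field values are hypothetical.
import xml.etree.ElementTree as ET

GMD = "http://www.isotc211.org/2005/gmd"
GCO = "http://www.isotc211.org/2005/gco"
ET.register_namespace("gmd", GMD)
ET.register_namespace("gco", GCO)

def char_string(parent, tag, text):
    """Append a gmd:<tag> element wrapping a gco:CharacterString."""
    el = ET.SubElement(parent, f"{{{GMD}}}{tag}")
    ET.SubElement(el, f"{{{GCO}}}CharacterString").text = text

record = ET.Element(f"{{{GMD}}}MD_Metadata")
char_string(record, "fileIdentifier", "ethrough-dataset-0001")  # hypothetical ID
char_string(record, "language", "eng")
char_string(record, "title", "e.THROUGH CRM mapping dataset")   # hypothetical title
char_string(record, "abstract",
            "Mapping of (C)RM value chains across EU mining regions.")

ET.ElementTree(record).write("metadata.xml",
                             encoding="utf-8", xml_declaration=True)
```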
## Making data openly accessible
Which data produced and/or used in the project will be made openly available
as the default? If certain datasets cannot be shared (or need to be shared
under restrictions), explain why, clearly separating legal and contractual
reasons from voluntary restrictions.
All data produced/used will be available at least for viewing. Some of the
data will be available for downloading, for example the public deliverables.
It may not be possible to enable downloading of all of the data due to
copyright and IP issues; for example, part of the data is owned by the
partners in each country.
How will the data be made accessible (e.g. by deposition in a repository)?
Data (e.g. generated papers) will be made accessible by deposition in a
repository.
What methods or software tools are needed to access the data? Is documentation
about the software needed to access the data included? Is it possible to
include the relevant software (e.g. in open source code)? Is there a need for
a data access committee?
Data can be accessed through internet without need for special software,
standard free internet browser (e.g. Google Chrome) and pdf reader will be
sufficient. As the data can be viewed and accessed with standard tools there
will be no need for specific software documentation or specific software to be
downloadable with the data. Neither is there any need to set up a data access
committee.
Where will the data and associated metadata, documentation and code be
deposited? Preference should be given to certified repositories which support
open access where possible. Have you explored appropriate arrangements with
the identified repository?
The guidelines for achieving INSPIRE compliance, including recommendations for
metadata population, will be used.
If there are restrictions on use, how will access be provided? Are there well
described conditions for access (i.e. a machine readable license)? How will
the identity of the person accessing the data be ascertained?
There will be no restrictions to access the data for viewing.
## Making data interoperable
Are the data produced in the project interoperable, that is allowing data
exchange and re-use between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different origins)?
e.THROUGH produced data will be made fully compatible with (i) the Minerals4EU
European Minerals Knowledge Data Platform (EU-MKDP, see
http://minerals4eu.brgm-rec.fr/), and (ii) the JRC Raw Materials Information
System 2.0 (RMIS 2.0), depending on its specifications (not yet known), in order
to facilitate the link between the platforms.
What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?
The data on mining regions will be interoperable, e.g. because a standard
lexicon on mineral deposits has been defined and further developed during
previous projects such as ProMine and Minerals4EU.
Will you be using standard vocabularies for all data types present in your
data set, to allow inter-disciplinary interoperability?
Standard vocabularies will be used whenever possible. All produced knowledge
will be delivered in formats of various types, either semi-structured (e.g.
datasets) or non-structured (e.g. reports, abstracts, charts, graphs, etc.).
In case it is unavoidable that you use uncommon or generate project specific
ontologies or vocabularies, will you provide mappings to more commonly used
ontologies?
If there is a need to create e.THROUGH specific ontologies or vocabularies,
mappings to more commonly used ontologies will be provided.
## Increase data re-use (through clarifying licences)
How will the data be licensed to permit the widest re-use possible? When will
the data be made available for re-use? If an embargo is sought to give time to
publish or seek patents, specify why and how long this will apply, bearing in
mind that research data should be made available as soon as possible.
The signed consortium agreement specifies, inter alia, the terms and
conditions pertaining to ownership, access rights, exploitation of background
and results, and dissemination of results, in compliance with the grant
agreement and Regulation (EU) No 1290/2013 of 11 December 2013.
Results of the e.THROUGH project will be made publicly available, free of
charge.
Information exchanged during closed consultation with external participants
(e.g. during the two Stakeholder workshops) will be subject to a Non-
Disclosure Agreement. The classification level of the information to be shared
will be defined by the Steering Group (public or restricted) and monitored by
the Project Coordinator (WP1).
The data used in e.THROUGH can be freely viewed and the public reports and
training material produced can be freely downloaded. There will be no licences
concerning the data.
Are the data produced and/or used in the project useable by third parties, in
particular after the end of the project? If the re-use of some data is
restricted, explain why. How long is it intended that the data remains
reusable?
The e.THROUGH website and the data portals serving the data will remain in
operation after the end of the project. The data portals are intended to be
more or less permanent services; however, it is also possible that they could
be replaced by a comparable but more advanced service in the future.
Are data quality assurance processes described?
Yes.
# ALLOCATION OF RESOURCES
What are the costs for making data FAIR in your project? How will these be
covered? Note that costs related to open access to research data are eligible
as part of the Horizon 2020 grant (if compliant with the Grant Agreement
conditions).
The e.THROUGH project has been designed so that the data is FAIR (findable,
accessible, interoperable and reusable), but it is not possible to quantify
the cost of this.
Who will be responsible for data management in your project?
In e.THROUGH, NOVA as the Coordinator has the main responsibility for data
management; however, the decisions will be made in the Steering Committee.
Are the resources for long term preservation discussed (costs and potential
value, who decides and how what data will be kept and for how long)?
The e.THROUGH website will be freely accessible on the internet and hosted on
FCT NOVA servers during the lifetime of the project (note that a perennial
solution for hosting it beyond the lifetime of the project will have to be
sought).
# DATA SECURITY
What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)? Is the data safely
stored in certified repositories for long term preservation and curation?
The servers serving the data are secured by standard means such as firewalls
and automatic backing up. Data in servers can be recovered by standard
recovery procedures. The project does not deal with sensitive data; any
personal information collected during the project will be made anonymous by
the end of the project.
# ETHICAL ASPECTS
Are there any ethical or legal issues that can have an impact on data sharing?
These can also be discussed in the context of the ethics review. If relevant,
include references to ethics deliverables and ethics chapter in the
Description of the Action (DoA).
The data in the data portals is not sensitive in any way and does not contain
any personal information. The IP rights are respected by acknowledging data
sources. Public project deliverables contain names and organisational
information of some of the people involved in producing the deliverable, but
people are informed and publishing that personal information does not create
any risks to anyone.
Is informed consent for data sharing and long term preservation included in
questionnaires dealing with personal data?
If questionnaires are used for collecting information on data, there will be
an option to fill in the questionnaire anonymously. All replies will be made
anonymous after the information collected has been analysed, at the latest by
the end of the project.
In e.THROUGH, the Lyyti registration system (or a similar tool) will be used
to collect personal information for NOVA's register of participants of events.
It can be used for registering participants of stakeholder events and internal
meetings. As the EU legislation on personal information came into effect on 25
May 2018, it is not possible to submit Lyyti registration forms without
accepting the privacy policy statement, which describes in detail how the
submitted personal information will be managed. All information on the
participants will be removed from the register at the latest after one month
after the end of each event. In addition, the privacy policy statement
explains that names and organisational information of participants may be
permanently stored in reports describing each event.
# OTHER ISSUES
Do you make use of other national/funder/sectorial/departmental procedures for
data management? If yes, which ones?
Not at present.
# FURTHER SUPPORT IN DEVELOPING YOUR DMP
* The Research Data Alliance (see https://www.rd-alliance.org) provides a _Metadata Standards Directory_ that can be searched for discipline-specific standards and associated tools.
* The _EUDAT B2SHARE_ tool (see https://trng-b2share.eudat.eu) includes a built-in license wizard that facilitates the selection of an adequate license for research data.
* Useful listings of repositories include the _Registry of Research Data Repositories_ (see https://www.re3data.org/). Some repositories, like _Zenodo_, an OpenAIRE and CERN collaboration, allow researchers to deposit both publications and data, while providing tools to link them.
* Other useful tools include _DMP online_ (see https://dmponline.dcc.ac.uk/) and platforms for making individual scientific observations available, such as ScienceMatters (see https://www.sciencematters.io/).
# CONCLUSIONS
This Data Management Plan (DMP) is a flexible and living document that will be
reviewed and updated as needed as the project proceeds. Its clear version
numbering also allows the e.THROUGH project team to adapt to future
developments, especially the lessons learned from the first months of the
project and its initial activities/secondments. As a minimum timetable for
updates, this DMP will be updated in the context of the periodic
evaluations/assessments of the project.
Finally, it is crucial to ensure the sustainability of the Data Management
strategy so that the knowledge developed within the project continues to exist
beyond the life cycle of the project.
1428_SERENA_779305.md
# Chapter 1 Introduction
The Data Management Plan (DMP) outlines how research data will be handled
during a research project and after it is completed. It includes clear
descriptions and rationale for the access regimes that are foreseen for
collected data sets.
This DMP describes the data management life cycle, see Figure 1, for all data
sets that will be collected, processed or generated by the SERENA project.
This is done by outlining how research data will be handled during the SERENA
project, and after the project is completed, describing what data will be
collected, processed or generated and following what methodology and
standards, whether and how this data will be shared and/or made open, and how
it will be curated and preserved.
The DMP is not a fixed document; it evolves and gains more precision and
substance during the lifespan of the SERENA project.
More elaborated versions of the DMP will be delivered at later stages of the
project. The DMP will be updated at least by the mid-term and final review to
fine-tune it to the data generated and the uses identified by the consortium
since not all data or potential uses are clear from the start.
Figure 1 Data Management Lifecycle
The outline of this report is as follows: after the introduction, a
description of the data is given, which includes what types of datasets are
expected to be collected, how the ownership of IP is arranged, and how this
data can be accessed and shared within, as well as outside of, the SERENA
consortium. This is followed by how the data will be archived after the end of
the project. Finally, an overview is given of the collected datasets.
# Chapter 2 Data description
This chapter gives a description of the data, including what types of datasets
are expected to be collected, how the ownership of IP is arranged, and how
this data can be accessed and shared within, as well as outside of, the SERENA
consortium.
## 2.1 Datasets
The different academic and industrial partners will create different kinds of
datasets. We expect datasets of the following types:
* Proof-of-concept specifications
* Economical aspects and impact data
* Market data
* Design data
* Measurement data
* Validation data
* Simulation data
* Algorithm descriptions
* Source code
SERENA will investigate algorithms and architectures for the hybrid analog-
digital beamforming system. These algorithms will be implemented (as source
code) and also tested with the proof-of-concept system. The test with the
proof-of-concept system will generate measurement data, as well as validation
data. Currently, we are investigating the specifications of our
proof-of-concept platform and beamforming system and comparing them to current
state-of-the-art specifications. The specifications for the SERENA outputs will be
confidential. Deliverables and reports are also part of the produced datasets
and may contain data from any of the above-mentioned datasets.
So far, no specification has been set for formats, and therefore no special
requirements are set for how to read and interpret data/source code. However,
since the implementations will be based on widely used software stacks, a
common format shall be realized. The formats used are typically data
dependent. Measurement data will be made available either in a
machine-readable text format (like CSV) or in a binary format compatible with
the software MATLAB. Source code and algorithm descriptions will be in text
format. Part of the data will need Mathworks MATLAB or an equivalent
application to be read. Other information necessary to interpret the data will
be made available as documentation or metadata. In general, simulation and
measurement data will be a collection of complex variables as a function of
stimulus frequency and/or input power. The data formats will vary, but most
will be CSV (or a similar format) with descriptive headers. For design data,
some of the CAD systems store their data in XML using an industry standard
format (OpenAccess). For proprietary data formats it is important that
sufficient information is stored in an open format to make it possible to
recreate datasets in the future; in SERENA this especially relates to circuit
designs, which may need to be re-simulated in the future using newer
simulation software tools. Thus, it is essential that the circuits, their
device details and their simulation settings are stored in an open format.
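As a hedged sketch of the open text format described above — the file name, column names and units are illustrative assumptions, not project specifications — measurement data in CSV with descriptive headers can be written and re-read with standard tooling:

```python
# Minimal sketch: write and re-read measurement data as CSV with a
# descriptive header row. File name, columns and units are assumptions.
import csv

rows = [
    {"frequency_GHz": 38.0, "input_power_dBm": -10.0, "output_power_dBm": 12.3},
    {"frequency_GHz": 39.0, "input_power_dBm": -10.0, "output_power_dBm": 12.1},
]

with open("measurement.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]))
    writer.writeheader()   # the header row documents the columns and units
    writer.writerows(rows)

with open("measurement.csv", newline="") as f:
    data = [{k: float(v) for k, v in row.items()} for row in csv.DictReader(f)]

print(data[1]["output_power_dBm"])  # prints 12.1
```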
To read the simulation and measurement data, access to the original files
should be sufficient. To interpret the data, the accompanying
design/measurement reports are necessary, as the data files alone do not
provide enough context. To read and interpret the design databases, the
original CAD software must be available (in exactly the same version used for
generating the data). Some of the data can be stored in industry standard
formats (GDS or OpenAccess), but this is no guarantee that the data can be
read or interpreted in the far future.
The amount of data cannot yet be estimated as it depends on the investigations
during the project lifetime.
## 2.2 Design data example
Over the length of the project at least three types of data for designs will
be created in the SERENA project. A specific design dataset example is given
here: The first is the design databases for the GaN-on-Si MMICs and the 39 GHz
proof-of-concept demonstrator. These databases are stored in vendor
proprietary formats (Keysight ADS, Ansoft HFSS, and Cadence Allegro/OrCAD).
The second type is simulation data from the electronic CAD tools (Keysight
ADS/Momentum/EMPro and Ansoft HFSS). The simulation data is stored in
vendor-specific proprietary formats, with the option of exporting to open data
formats or industry standard formats. The third kind of data is measurement
data of the designed GaN-on-Si MMICs, antennas, and proof-of-concept
demonstrators. This data will be stored in the partners' proprietary
databases.
Examples of simulation and measurement data are listed below (a short reading sketch follows the list):
* scattering parameters as a function of frequency and/or bias
* output power as a function of input power and frequency (and also bias)
* antenna radiation diagrams
* Error Vector Magnitude as a function of input power and frequency (and also bias).
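As an illustration of reading such complex-valued data from a MATLAB-compatible binary file with open tooling — the file name ("sparams.mat") and variable names ("freq", "S21") are hypothetical, not project conventions:

```python
# Sketch: load scattering-parameter measurement data stored in a
# MATLAB-compatible .mat file. File and variable names are hypothetical.
import numpy as np
from scipy.io import loadmat

mat = loadmat("sparams.mat")
freq = np.ravel(mat["freq"])  # frequency points, e.g. in Hz
s21 = np.ravel(mat["S21"])    # complex S21 value at each frequency point

gain_db = 20 * np.log10(np.abs(s21))  # magnitude of S21 in dB
print(freq[0], gain_db[0])
```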
The nature of the generated research data (specific to certain circuit and
system designs) limits the value of said data to a wide audience. The data
that is of value will typically be published in scientific journals and at
scientific/engineering conferences; these media provide the necessary context
to correctly interpret the data.
## 2.3 Intellectual property rights
The owners of the data will be the project partners who act as creators and
contributors. At a later stage, plans and/or specifications for licensing will
be decided; therefore, at this stage of the SERENA project no licensing is
required. One partner expects to make data available licensed under the
Creative Commons CC BY-NC-SA licence. Other partners expect to license data
according to their global strategies. At this stage there are no plans to
restrict the reuse of third-party data.
Most of the data will be restricted due to confidentiality and/or patent
rights. This depends on the exploitation plans of the involved partners and
the corresponding choices as described in the Description of Action (DoA).
Signal processing algorithms and related specifications may be published. Some
data will be made public, in particular when this data accompanies published
results for validation purposes.
## 2.4 Access and sharing
Primarily, potential users can find data via the partners' websites, the
consortium website, as well as via conferences, workshops and the public
deliverables.
Relevant data will be published in repositories, such as Zenodo
(_https://zenodo.org/_). The first dataset has already been uploaded there by
TUB; see the table in Chapter 3 on datasets. Datasets will be linked on the
project website. Furthermore, this will partially be done through SERENA
public deliverables. Also, part of the data will be published in open access
publications or presented at conferences and workshops.
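As a hedged sketch of such a deposit — based on the Zenodo REST API as documented at the time of writing (https://developers.zenodo.org); the token, file name and metadata values are placeholders, and error handling is omitted:

```python
# Hedged sketch of depositing a dataset on Zenodo via its REST API.
# Token, file name and metadata are placeholders; treat as illustration.
import requests

TOKEN = "REPLACE_WITH_ZENODO_TOKEN"
BASE = "https://zenodo.org/api/deposit/depositions"

# 1. Create an empty deposition and read back its file-bucket URL.
dep = requests.post(BASE, params={"access_token": TOKEN}, json={}).json()
bucket = dep["links"]["bucket"]

# 2. Upload the data file into the deposition's bucket.
with open("measurement.csv", "rb") as fp:
    requests.put(f"{bucket}/measurement.csv", data=fp,
                 params={"access_token": TOKEN})

# 3. Attach minimal metadata, then publish to mint the persistent DOI.
meta = {"metadata": {"title": "Example SERENA dataset",
                     "upload_type": "dataset",
                     "description": "Illustrative deposit only.",
                     "creators": [{"name": "Doe, Jane"}]}}
requests.put(dep["links"]["self"], params={"access_token": TOKEN}, json=meta)
requests.post(dep["links"]["publish"], params={"access_token": TOKEN})
```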
There will not be any restrictions within the SERENA consortium.
Sharing data with the public (e.g. universities, research organisations, SMEs,
industries, etc.) will depend on the dissemination level of the deliverables.
Publications will be open access. Besides that, SERENA will follow and comply
with the FAIR (Findable, Accessible, Interoperable, and Re-usable) data
principles within H2020 data management.
### _2.4.1 Means of internal data sharing_
Within the consortium, the data will be shared via the project-internal SVN
repository. This repository is restricted to the SERENA consortium. The
partners each have their own internal IT infrastructures with their respective
IT guidelines. The data will be shared within the consortium as soon as it is
available, and with the public after the due date of the deliverable, given
that the deliverable is public.
Our chosen types, formats and software enable sharing and long-term access of
the data. There are open and freely available alternatives to the often-chosen
software package MATLAB, with which the data can be read and interpreted. The
amount of data does not imply any restrictions on the storage, backup and
access. A persistent identifier will be obtained for scientific publications
and their underlying research datasets.
Data will be shared and archived within the project’s SVN repository. Apache
Subversion (often abbreviated as SVN) is a software versioning and revision
control system distributed as open source.
Whenever research data are relevant for the consortium and will be used in
cooperation among several partners, data storage in the project SVN will be
considered. Data that are individually owned and individually used will
preferentially be stored by the owning partner. The SVN allows easy
synchronization of documents between the server (hosted at the premises of the
coordinator TEC in Austria) and a participant’s local file storage for sharing
documents and data. The system includes tools for retrieving older versions of
a particular file, which means that all former versions of a file are
available and reproducible. Therefore, no conventional backup system is
necessary. Regarding the preservation of the data, there is no expiration date
after which the data becomes unavailable; even years after the project end the
data will still be available. Hence, long-term preservation of the data is secured.
Because the system is set up and maintained by the coordinator within their
own IT infrastructure in Austria, we have full control over the data at any
time, which is a significant advantage over cloud-based solutions.
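To make the revision-retrieval point concrete, a minimal sketch — assuming a standard svn command-line client is installed; the repository URL, revision number and credentials are placeholders, not the actual SERENA repository details:

```python
# Sketch: retrieve an older revision of a shared file from the project
# SVN via the command-line client. URL, revision and credentials are
# placeholders.
import subprocess

REPO_FILE = "https://svn.example.org/serena/trunk/data/measurement.csv"

result = subprocess.run(
    ["svn", "cat", "-r", "42", REPO_FILE,       # file content at revision 42
     "--username", "user", "--password", "secret"],
    capture_output=True, check=True)

with open("measurement_r42.csv", "wb") as f:
    f.write(result.stdout)
```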
Research data submitted to the SERENA SVN is protected by means of mandatory
authorization. Partners who wish to have access to the SVN (both read-only and
read/write access) need to obtain user credentials from TEC. A username will
be generated and the user has to define a password. Only with these
credentials does the user get access to any documents on the SVN. All
communication between the clients and the SVN server uses SSL encryption.
## 2.5 Archiving and preservation
Preserved data can form the basis for future scientific research work; thus
the data will be retained and preserved for at least 3 years after the project
end. Other data, for example data stored in public repositories with a
persistent identifier, will be preserved much longer. Locally, data
preservation will be handled by the individual consortium partners; an example
of such an archive is the TUB backup system.
We plan to retain data for 3 years after the project end. The expected costs
are those that arise through server provision and maintenance. For some
partners this is standard functionality offered within their institute or
company, for which no extra fee needs to be paid.
## 2.6 Documentation and metadata
Appropriate documentation in the form of public deliverables and/or
publications will be provided. There will also be data specification sheets
for data sets underlying scientific publications. Any additional and helpful
data or metadata will be logged.
There are no special requirements set yet for this additional data or
metadata; however, it will be provided in human-readable form. The metadata is
purely descriptive, serving to make the data accessible. As described in this
document, SERENA will comply with the FAIR principles stated for H2020
projects and their data management. Therefore, SERENA will follow those rules
and ensure that the data will be **F**indable, **A**ccessible,
**I**nteroperable, and **R**e-usable.
For measurement data, the documentation will include a description of the
measurement setup, the devices used, and the settings/parameters. For source
code, the documentation will include the name and version of the software the
code is written for. The documentation will be made available in
human-readable text form alongside the data.
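A minimal sketch of such a human-readable sidecar file — all field values are illustrative assumptions, not prescribed SERENA fields:

```python
# Sketch: emit a human-readable metadata sidecar next to a measurement
# file, as suggested above. All field values are illustrative.
metadata = {
    "dataset": "measurement.csv",
    "setup": "39 GHz proof-of-concept demonstrator",   # assumed description
    "instrument": "vector network analyzer",
    "settings": "input power sweep, -20 to 0 dBm",
    "software": "MATLAB R2018b or an open equivalent",
}

with open("measurement.csv.meta.txt", "w") as f:
    for key, value in metadata.items():
        f.write(f"{key}: {value}\n")
```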
# Chapter 3 Data collection
This chapter will be updated throughout the project; descriptions of the
datasets, as well as a tabular overview, are given below.
## 3.1 Dataset 1
Measurement data and processing code for Mathworks MATLAB underlying the beam
alignment results of the publication “An Analog Module for Hybrid Massive MIMO
Testbeds Demonstrating Beam Alignment Algorithms”. The results compare three
algorithms for beam alignment and were measured at 2.4 GHz using the Hybrid
Massive MIMO testbed of the CommIT chair of the Technische Universität Berlin.
The measurement data is in the binary .mat format and the source code in text
format; this choice is due to the use of MATLAB to generate the data. The data
is available on Zenodo and has a persistent identifier.
In Table 1 we will track our datasets; a description of the content of Table 1
is given below.
<table>
<tr>
<th>
**Data Nr.**
</th>
<th>
**Responsible Beneficiary**
</th>
<th>
**Data set reference and name [used methodology]**
</th>
<th>
**Data set description: end user (e.g. university, research organisation, SME, scientific publication)**
</th>
<th>
**Existence of similar data (link, information)**
</th>
<th>
**Possibility for integration and reuse (Y/N) + information**
</th>
<th>
**D 1**
</th>
<th>
**A 2**
</th>
<th>
**AI 3**
</th>
<th>
**U 4**
</th>
<th>
**I 5**
</th> </tr>
<tr>
<td>
1
</td>
<td>
TUB
</td>
<td>
DOI: 10.5281/zenodo.1217533
Title: Beam Alignment Measurements using a Hybrid Massive MIMO Testbed
</td>
<td>
Researchers (university and others)
</td>
<td>
</td>
<td>
Y
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td> </tr>
<tr>
<td>
2
</td>
<td>
</td>
<td>
[please fill-in]
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
3
</td>
<td>
</td>
<td>
[please fill-in]
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
4
</td>
<td>
</td>
<td>
[please fill-in]
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
5
</td>
<td>
</td>
<td>
[please fill-in]
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
6
</td>
<td>
</td>
<td>
[please fill-in]
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
Table 1: Data Overview
1. Discoverable
2. Accessible
3. Assessable and intelligible
4. Usable beyond the original purpose of which it was collected
5. Interoperable to specific quality standards
### Responsible Beneficiary
Who owns the data?
**_Data set reference and name [used methodology]:_**
Identifier for the data set to be produced, as well as the methodology used.
**_Data set description:_**
Description of the data that will be generated or collected: its origin (in
case it is collected), its nature and scale, to whom it could be useful, and
whether it underpins a scientific publication. Information on the existence
(or not) of similar data and the possibilities for integration and reuse.
### Research Data Identification
The boxes (D, A, AI, U and I) symbolize a set of questions that should be
clarified for all datasets produced in this project.
**Discoverable:**
Are the data and associated software produced and/or used in the project
discoverable (and readily located), identifiable by means of a standard
identification mechanism (e.g. Digital Object Identifier)?
**Accessible:**
Are the data and associated software produced and/or used in the project
accessible, and in what modalities, scope and licenses (e.g. licencing
framework for research and education, embargo periods, commercial
exploitation, etc.)?
**Assessable and intelligible:**
Are the data and associated software produced and/or used in the project
assessable for and intelligible to third parties in contexts such as
scientific scrutiny and peer review (e.g. are minimal datasets handled
together with scientific papers for the purpose of peer review; is the data
provided in a way that judgements can be made about its reliability and the
competence of those who created it)?
#### Useable beyond the original purpose for which it was collected
Are the data and associated software produced and/or used in the project
usable by third parties even a long time after the collection of the data
(e.g. is the data safely stored in certified repositories for long-term
preservation and curation; is it stored together with the minimum software,
metadata and documentation to make it useful; is the data useful for wider
public needs and usable for the likely purposes of non-specialists)?
#### Interoperable to specific quality standards
Are the data and associated software produced and/or used in the project
interoperable, allowing data exchange between researchers, institutions,
organisations, countries, etc. (e.g. adhering to standards for data annotation
and data exchange, compliant with available software applications, and
allowing re-combinations with different datasets from different origins)?
It is recommended to mark an “X” in each applicable box and to explain it in
more detail afterwards.
# Chapter 4 Summary and conclusion
This DMP outlines how research data will be handled during the SERENA research
project and after its completion. The DMP is not a fixed document; it evolves
and gains more precision and substance during the lifespan of the SERENA
project.
Within the SERENA project several different datasets are expected to be
created. As discussed, these datasets will be stored and archived by the
individual partners on their local IT infrastructure. Shared data will be
stored and archived on the project closed repository, managed by TEC. Public
data will be stored in public repositories, with a persistent identifier. All
these datasets will be accompanied by descriptive metadata, deliverables,
presentations, and publications. All datasets will be stored for at least
three years; datasets held by partners may be stored for much longer. Public
datasets will be provided with a persistent identifier and may exist
indefinitely.
One of the key points identified in this document is that, where the use of
proprietary formats is unavoidable, it is important to store sufficient
information to be able to recreate the data, especially simulation data, at a
later stage. This DMP outlined what information is sufficient for this
purpose.
The Consortium Agreement (CA) forms the legal basis for how to handle issues
related to IP and defines rules in relation to dissemination and publications.
The SERENA consortium is convinced this DMP will ensure proper handling
according to the open data pilot, during and after the duration of the SERENA
project.